Articles by Alexander Todorov

The upcoming Kiwi TCMS v11 contains new functionality around TestCase parameters and TestRun environments which affects how your final test execution matrix will look. This article provides detailed information about these features, but keep in mind that they are still considered a technology preview.

Parameters

Consider a login functionality which accepts email address and password. Possible states for these fields are:

  • Email address:

    • valid - well formed email string, exists in database, access is allowed
    • invalid - malformed email string, should not exist in the DB but this fact is not relevant to the test
    • disabled - well formed email string, exists in database, access is not allowed
  • Password:

    • correct - matches the value in database for the given email address
    • another - matches the value in database which is related to another email address
    • wrong - doesn't match the value in database
    • empty - value is empty string, a special case of wrong
    • invalid - value doesn't conform to predefined rules. May or may not be relevant to login functionality

Depending on how the software under test is put together, you can design multiple test cases. Fundamentally, however, these are all the same test case and the states above are input parameters to it!

Definition: TestCase parameters are different input values which do not fundamentally affect the result of a test case! A TestCase with parameters will result in multiple test executions!

In other words, you will be executing a parameterized test scenario multiple times with different input values! Inside Kiwi TCMS the actual parameter values at the time of execution are recorded into the TestExecution model and will not change if you modify the test case parameter values afterwards!

Definition: TestExecution parameters record a snapshot of TestCase parameters at the time when you intended to execute a particular test scenario!
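To make this concrete, here is a minimal sketch in plain Python (an illustration only, not Kiwi TCMS code) of how the login states above multiply into executions:

from itertools import product

# the login states described above, expressed as TestCase parameters
parameters = {
    "email": ["valid", "invalid", "disabled"],
    "password": ["correct", "another", "wrong", "empty", "invalid"],
}

# every combination becomes one test execution: 3 x 5 = 15 executions,
# each recording its own snapshot of the parameter values
for email_state, password_state in product(*parameters.values()):
    print(email_state, password_state)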

Environments

A testing environment represents the specifics of where exactly you executed your test suite. Consider this example:

The default desktop environment of Fedora is GNOME, but if you prefer an alternative, you can download installation media which contain slightly different defaults, e.g. KDE, Xfce, MATE and others, see https://spins.fedoraproject.org.

Regardless of which Fedora variant you choose, the expected functionality of the default desktop experience is the same. However this can only be guaranteed with exhaustive testing across all variants. Check out the test matrix at https://fedoraproject.org/wiki/Test_Results:Fedora_36_Rawhide_20220118.n.0_Desktop?rd=Test_Results:Current_Desktop_Test#Non_release-blocking_desktops:_x86_.2F_x86_64

Definition: a TestRun environment is a set of attributes which apply to the entire test suite at the time of execution. Usually you expect test results in different environments to be the same!

In Kiwi TCMS environments are represented as named containers of key/value pairs. The same key may have multiple values! They can be found under ADMIN -> Everything else -> Environments.

Because environments are meant to affect the entire test suite they are linked to the TestRun model. When creating a new test run you can select multiple Environment records.

Test matrix generation

The existing behavior in Kiwi TCMS is that when a test run is created there is exactly one test execution for every test case added to that test run.

In the Fedora example shown above some of the test cases also have their own parameters, e.g. the QA:Testcase_desktop_app_basic scenario.

Definition: TestRun environment key/values will be combined with TestCase parameter key/values to form the final test matrix! This opens up the possibility for combinatorial test execution generation.

Once parameters and environment(s) are specified you will start seeing multiple test executions for the same test case inside newly created test runs. By default a full-combination test matrix will be created. The other option is to pairwise-combine all key/value records, as illustrated below.
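As a rough illustration of the difference between the two strategies, consider the following sketch in plain Python. It uses the third-party allpairspy package for the pairwise part; whether Kiwi TCMS uses the same library internally is an assumption on my part:

from itertools import product

from allpairspy import AllPairs  # third-party package, used here purely for illustration

# hypothetical key/value records: two environment keys plus one parameter key
values = [
    ["Server", "Workstation"],  # environment key: Variant
    ["aarch64", "x86_64"],      # environment key: Arch
    ["Yes", "No"],              # parameter key: Encryption
]

full_matrix = list(product(*values))  # full combination: 2 x 2 x 2 = 8 rows
pairwise = list(AllPairs(values))     # fewer rows, but every pair of values is still covered

print(len(full_matrix), len(pairwise))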

Important: test execution generation works only when creating or cloning a test run that contains test cases. It does not yet work for test cases added after a test run has been created!

Environment(s) vs Tag(s)

Inside Kiwi TCMS you can use both environments and tags to annotate test runs. There are 3 important facts that hold true:

  • Tags possess only informational value, they don't influence how you perform testing;
  • Environments possess informational value and govern the final test matrix;
  • Environments which have a single value for each different key are the same as tags!

"Example from #1344"

If we look at this example from Issue #1344 we can make out the following keys:

  • Driver - 2 values
  • API - 2 values
  • Python - 2 values
  • Java - 1 value
  • Eclipse - 1 value
  • Host OS - 1 value
  • Target OS - 1 value
  • Redistributable - 1 value
  • Testing Type - 1 value

Here Driver, API and Python clearly should affect your test matrix; otherwise there isn't much point in recording the different values in the first place. That results in an 8x (2 x 2 x 2) multiplication factor for every piece of functionality that may be affected by or related to these attributes, presumably the entire functionality of the product under test.

Java, Eclipse, Host OS and Target OS carry only informational value here, but it looks like more values could be possible. If that's the case, these attributes will also affect the overall test matrix.

Redistributable and Testing Type look like information-only attributes. They don't appear to have any relevance to the test matrix at all. The same information-only effect can be achieved both with environments and with tags.

Practical rules:

  1. Attributes which affect a single test case should be defined as TestCase parameters
  2. Attributes which affect all test cases in a suite should be defined as TestRun environment(s)
  3. One big TestRun is likely best from an organizational and optimization point of view

You may decide to have multiple smaller test runs, usually with 1 value per environment key, if you think that fits your workflow better. However you may be missing out on some optimizations if you choose to do so.

Real life example

To illustrate how all of these new features work let's look at the Partitioning custom software RAID test case from Fedora QA. It instructs the tester to install Linux and, inside the partitioning screen, create a Software RAID partition, format it with a filesystem and assign a mount point! It is expected that once installation is complete the machine will reboot, the tester will be able to log in as root and the created filesystem will be available!

Factors that could affect this functionality:

  • RAID Level: Fedora supports 7 of them - 0, 1, 4, 5, 6, 10 and linear. These are all different drivers located under /lib/modules/$(uname -r)/kernel/drivers/md

    ./linear.ko.xz
    ./raid0.ko.xz
    ./raid10.ko.xz
    ./raid1.ko.xz
    ./raid456.ko.xz
    
  • Mount Point: / for example is mounted very early in the boot process, /home is mounted much later. / also relates to rescue mode, while /home doesn't. /home, if corrupted, may affect the terminal login process though

  • Encryption: Yes/No. This is stackable on top of the RAID device and "should-just-work". However it is often included into other partitioning test cases in order to discover weird issues and because it is a critical functionality

We could also add the actual filesystem type used to format the RAID block device, e.g. xfs, ext3, ext4, but that's not needed here! Here's how this test case looks in Kiwi TCMS:

"RAID test case"

Remember that Fedora comes with multiple variants for multiple CPU architectures! Of those we'll consider Server and Workstation, which are both available for the aarch64 and x86_64 CPU architectures. Here's how this can be represented inside Kiwi TCMS:

"Fedora variants represented as environment"



Next we need to organize test execution for an upcoming release by creating test run(s) and selecting environment and matrix generation type: "New test run with environment"

The possible outcomes are (see the sketch after this list for the arithmetic behind the full matrix):

  • 112 test executions: the full test matrix between all Fedora variants and all RAID parameters
  • 56 test executions: 4 test runs x 14 test executions each; one test run per variant, without an environment; RAID parameters are pairwise-combined
  • 16 test executions: all Fedora variants are pairwise-combined together with all RAID parameters
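A quick back-of-the-envelope check of the full-matrix figure (plain Python; the key and value names follow the prose above and are not the exact Kiwi TCMS records):

from itertools import product

environment = {
    "Variant": ["Server", "Workstation"],
    "Arch": ["aarch64", "x86_64"],
}
parameters = {
    "RAID Level": ["0", "1", "4", "5", "6", "10", "linear"],
    "Mount Point": ["/", "/home"],
    "Encryption": ["Yes", "No"],
}

# 2 x 2 environment combinations times 7 x 2 x 2 parameter combinations
combos = list(product(*environment.values(), *parameters.values()))
print(len(combos))  # 4 * 28 = 112 test executions for the full matrix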

Finally, this is how the resulting test run looks. Notice the 3-boxes icon for each test execution, listing the actual parameters which should be used during testing:

"TR with environment, TE with parameters"

Environment parameters are read-only here because they have been copied to all test execution records. It usually doesn't make sense to modify your environment mid-way through test execution. If that's needed, or you've made a mistake, it's probably easier to create a new test run.

Happy Testing!


If you like what we're doing and how Kiwi TCMS supports various communities please help us!

Project roadmap 2021

Hello testers, this blog post outlines the Kiwi TCMS roadmap for 2021 and what we feel is important to us!

roadmap image 2021

Project sustainability

The big goal towards which we are striving is to turn Kiwi TCMS into a sustainable open source project. For now this means three key areas:

1) Team
2) Technical
3) Community & Adoption

Team

Right now we have a core team with 3 members, 3 more members on-boarding and 2 interns. In the past year we weren't successful in turning more people into core-team members. I have seen several problems, and the core team will significantly reconsider how we approach & recruit people to join the team, and how we on-board and help them so that they can become productive and fully fledged team members.

The long-term focus is improving and strengthening the core team, which also implies a level of responsibility and performance criteria which core-team members must meet.

Goal: 1 PR/week/person as broad measure of individual performance so that we can operate with a predictable capacity.

Goal: (re)structure internal team processes around candidates and newcomers! Note: These are not public at the moment.

Technical

The areas shown in the picture above will receive higher priority.

Goal: complete remaining Telemetry features.

Goal: complete the remaining refactoring, with a major focus on pylint issues, migration to Patternfly v4 and eslint issues.

Goal: improve SSL configuration, with a strong bias towards Let's Encrypt being configured by default.

Goal: provide support for web hooks so that Kiwi TCMS can be integrated more easily/flexibly with 3rd party systems. We're aiming for Kiwi TCMS to be able to POST webhooks to external URLs and inform them about events in the system.

Community & Adoption

Last year Kiwi TCMS had massive success despite not visiting many events. The open source community spirit is important to us both in terms of technical collaborations and in terms of features & exposure which drives further adoption of Kiwi TCMS downstream.

Goal: complete bug-tracker integration milestone.

Goal: extended GitHub integration via GitHub actions which will report results into our database. We do have other ideas as stretch goals.

Goal: similar to GitHub Actions, we're looking towards GitLab pipelines and a similar integration with GitLab.

Goal: continue our collaboration with Major League Hacking Fellowship program.

Goal: apply for the Google Summer of Code program and work with students if selected.


If you like what we're doing and how Kiwi TCMS supports various communities please help us!

Roadmap status report for 2020

Another year rolls by and, despite all difficulties, it is by far the strongest one for Kiwi TCMS!

Stats

  • 2 physical events and a few virtual ones
  • 12 releases
  • 24 language translations
  • 683 PRs, most of them closed & merged
  • Reached Issue/PR number 2000
  • Reached 5000 commits
  • Reached 8000 registrations via https://public.tenant.kiwitcms.org
  • Reached 270K downloads via Docker Hub

Status update

In the 2020 roadmap we established 3 main areas to work on. Their completion scores are:

1) Team - 30%
2) Technical - 70%
3) Community - 100%

The average score is about 67% completion!

Team

Overall the team has stalled in its growth and improvement. Contributors who started onboarding a year ago are still under-productive and do not meet our criteria to become core-team members. The average team productivity is far below the goal of 1 PR/week/person. This is largely due to contributors not being active on their items, very long periods between pull requests and a longer than average time for closing pull requests.

The only positive side in this area is that the core team has improved its internal processes: it meets regularly, discusses issues with members when they arise, and relatively quickly spots problems and acts on them.

Technical

The dominating effort this year was refactoring the remaining legacy UI and converting everything to PatternFly. The effect of this is reduced code complexity, an improved CodeClimate score and technical debt level, removal of vendored-in JavaScript dependencies and removal of lots of unused code in favor of using the existing API.

Additional work has been done on closing bugs, implementing some features, integration with new bug tracking systems and improvements around the telemetry feature.

However there is still a lot of work to be done until all telemetry pages are complete. There are also around 30 pylint issues remaining which require internal refactoring and more legacy code cleanup. It's getting there but it's also getting harder.

Community

This area turned out to be our strongest one this year. We started very strong at FOSDEM 2020 and collaborated with multiple communities on plugins, code & translation contributions, adoption of Kiwi TCMS and general partnerships around open source.

Kiwi TCMS got a substantial grant from the Mozilla Foundation which helped bootstrap our open source bounty program and internship program.

In May we reached 100K downloads on Docker Hub, then in October we surpassed 200K. Next month we'll reach 300K!

Summary

2020 was definitely a year full of uncertainties and hardship. It was not what we were used to, and there were many ideas and lead projects that looked very promising at the beginning of the year but didn't materialize for a multitude of reasons.

Overall Kiwi TCMS, its team and its community did very well and I am confident that next year we can achieve more together!

Happy Testing and Happy New Year!


If you like what we're doing and how Kiwi TCMS supports various communities please help us!

Kiwi TCMS integration with 3rd party bug trackers supports the 1-click bug report feature. However you may want to change how the initial information is structured or even what exactly is written in the initial comment. This article shows how to do this.

The default text used for 1-click bug reports is compiled based on information present in the TestExecution - Product, Version, TestCase.text, etc. This is encapsulated in the tcms.issuetracker.base.IssueTrackerType._report_comment() method. You may extend the existing bug tracker integration code with your own customizations. In this example I've extended the KiwiTCMS bug tracker implementation, but you can provide your own from scratch:

# filename: mymodule.py
from tcms.issuetracker.kiwitcms import KiwiTCMS


class ExtendedBugTracker(KiwiTCMS):
    def _report_comment(self, execution):
        comment = super()._report_comment(execution)

        comment += "----- ADDITIONAL INFORMATION -----\n\n"
        #
        # fetch more info from other sources
        #
        comment += "----- END ADDITIONAL INFORMATION -----\n"
        return comment

Then override the EXTERNAL_BUG_TRACKERS setting to include your customizations:

EXTERNAL_BUG_TRACKERS.append('mymodule.ExtendedBugTracker')

and change the bug tracker type, via https://tcms.example.com/admin/testcases/bugsystem/, to mymodule.ExtendedBugTracker.

IMPORTANT

  • Information on how to change settings can be found here
  • mymodule.py may live anywhere on the filesystem but Python must be able to import it
  • It is best to bundle all of your customizations into a Python package and pip3 install it into your customized docker image
  • API documentation for bug tracker integration can be found here
  • Rebuilding the docker image is outside the scope of this article. Have a look at this Dockerfile for inspiration

Happy testing!

Hello testers, I have to admit that I made a rookie mistake and deleted the entire email database for the Kiwi TCMS newsletter! And of course we didn't have a backup of this database :-(. Please re-subscribe here and read below if you are interested in knowing what happened.

Last week, while exploring how to cancel active subscriptions for our deprecated GitHub Marketplace listing, I found there is no way to cancel them programmatically. So I compiled a list of email addresses and decided to send subscribers an email asking them to cancel their subscriptions.

For this purpose I decided to import the contacts into Mailchimp, because it gives you a better interface to design the actual message, include images in the message body, and preview and test the message before it is sent! The import of addresses went fine: new addresses were tagged appropriately to separate them from the rest of the newsletter audience, but they were not subscribed to receive emails automatically.

I selected "non-subscribed" option when importing as a second barrier to accidentally emailing people who do not want to receive regular news from us! However it turned out Mailchimp can't send messages to non-subscribed addresses! Maybe that's part of their attempts to be GDPR compliant.

So I decided to delete the freshly imported addresses, import them again and this time tag + subscribe them during the import! When selecting the addresses for deletion I am 99% confident I did filter them by tag first and then selected DELETE! And the entire contacts list was gone!

I also contacted Mailchimp immediately to ask whether or not the addresses can be restored. Unfortunately they are trying to be super GDPR compliant and claim they don't have this information in their system anymore. In this particular case we had been relying on the vendor to keep backups on their end, so we didn't even think about backing up this database ourselves!

For users who have accounts at https://public.tenant.kiwitcms.org we do have their email addresses, but we're not going to automatically re-subscribe them. We stopped auto-subscribing 2 years ago and there's also no way of telling which addresses were on the list in the first place.

Please re-subscribe here and I promise we're going to start backing up the newsletter database as well.

Thank you!

Project roadmap 2020

Hello testers, the Kiwi TCMS team sat down together last week and talked about what we feel is important for us during the upcoming year. This blog post outlines our roadmap for 2020!

roadmap image 2020

Project sustainability

The big goal towards which we are striving is to turn Kiwi TCMS into a sustainable open source project. For now this means several key areas:

1) Team
2) Technical
3) Community

Team

Right now we have a core team with 6 newcomers on-boarding. Engineering performance is all over the place, with some people contributing too much while others contribute too little. More importantly, there is no consistent pace of contributions, which makes planning the timely completion of technical tasks impossible.

At the moment we do operate as a bunch of disconnected people who happen to talk to each other from time to time.

We are going to adjust our internal processes and how we on-board new members. In fact we did our first "scrum-like" meeting this week and agreed to change our existing practice and strive to become better as a team!

Goal: to have a cohesive team at the end of the year which operates with a predictable capacity.

Goal: 1 PR/week/person as broad measure of individual performance.

Technical

The areas shown in the picture above will receive higher priority.

Goal: complete remaining Telemetry features.

Goal: complete bug-tracker integration milestone.

Goal: all pylint issues resolved.

Goal: migrate all remaining legacy templates to Patternfly UI. See patternfly-migration milestone.

Goal: replace backend views which only serve AJAX requests from the frontend with the JSON-RPC API instead.

Extra: start tackling the JavaScript mess that we have. This depends on and is related to the Patternfly migration and overall refactoring.

Extra: make it easier for downstream installations to extend and override parts of Kiwi TCMS in order for users to adjust the system to their own needs. The system is pretty flexible as-is but there have been requests, both online and offline, to provide some extra features! We'll start looking into them, likely making partial progress in the next 12 months.

Community

Last year Kiwi TCMS had massive success at every single conference that we've been to. Both the project and the team have been well received. While we are going to continue being part of various communities around the world, we are trying to limit extensive travel and focus on functionality and partnerships which will grow the Kiwi TCMS ecosystem, make the project even more popular and drive further adoption!

Goal: extended GitHub integration via kiwitcms-github-app plugin.

Goal: release the following test automation framework plugins for Kiwi TCMS:

For more information see test-automation-plugins milestone.

Ongoing: work with our partners from the proprietary and open source worlds. This is hard to quantify and lots of it doesn't actually depend on the team. However we are continuing to talk to them regularly. Expect new feedback to become available under GitHub Issues.

Extra: see what we can do about testing productivity! This has always been part of our mission but we have not been able to produce anything worth sharing. We do have ideas in this space but we are generally looking for partnerships and collaborations. It is very likely that there will not be very much progress on this front because it is hard to define it properly :-(.

Summary

At the end of the day most of these goals complement each other and help drive all of them to completion. Many of the people still on-boarding have expressed a desire to improve their Python & Django skills. Working to resolve issues in the specific areas above will give them this opportunity! I expect they will show good progress on their respective tasks so we can write more about them on this blog.

Happy testing!

Roadmap status report for 2019

Hello everyone, in this article I will outline the progress that the Kiwi TCMS team has made towards achieving the goals on our 2019 roadmap. TL;DR: last year we made lots of big and visible changes in Kiwi TCMS; this year less so. Progress has been slower than before and not as visible. Community and team are growing. More contributors are welcome.

Complete the internal refactoring

Status: small progress, needs help

CodeClimate progress is:

  • -60 code smells
  • -55 duplications
  • -50 other issues
  • 4.4% technical debt improvement
  • -240 hrs remaining until issues are fixed

The trend is showing fewer issues remaining, but progress has been slow. As we fix the easier items the remaining ones become harder to deal with.

We've done minor work related to fixing issues reported by pylint. Around 150 of them still remain!

We have not done any targeted work to resolve other issues reported by Scrutinizer, to remove vendored-in JavaScript libraries, to refactor JavaScript, or to classify issues in 3rd party dependencies.

Redesign the UI templates with the help of Patternfly

Status: 60% done, needs help

There are 22 HTML templates remaining to be redesigned (from 59). That's mostly due to internal cleanup and some refactoring! The Test plan and Test run pages are the two major templates that still need to be redesigned with Patternfly.

Modernize reporting aka Telemetry

Status: 60% done, in progress, behind schedule

The specs for the new Telemetry system have been defined after taking into account feedback on GitHub issues. Anton Sankov is the leading developer for this feature. So far we have 4 telemetry reports merged: testing break-down, status matrix, execution trends and flaky tests.

There are lots of minor issues or missing functionality in these first iterations (compared to specification). Work continues on the other telemetry use-cases and related items.

Plugins for 3rd party test automation frameworks

Status: good, needs help

UPDATE: no change in the last 6 months.

If you'd like to see plugins for more test automation frameworks and/or file formats please check out the documentation for links and more info.

Redefine bug-tracker integration

Status: 66% complete, in progress, behind schedule

We've been making slow progress on this milestone lately. For more info see https://github.com/kiwitcms/Kiwi/milestone/1

GitHub flow integration

Status: done, awaiting deployment

Our team spent some time making Kiwi TCMS the first open source TCMS available on the GitHub Marketplace. At the end of this year we were able to create a small application that allows further integration, extending the testing workflow to the GitHub platform.

This is waiting on a few more clarifications from GitHub before we deploy, but for now it can be considered done. Future functionality will be tracked and developed directly at https://github.com/kiwitcms/github-app/issues.

Agile integration with Trello

Status: no progress, will drop

This will be dropped from the roadmap for next year until we can get more interest from the community.

Improve engineering productivity

Status: no progress

Looking for external help here. This will stay as a low priority item on our roadmap for 2020 until we can free more resources on the team.

Community

Status: great, on track, needs work

This is our strongest area during this year. We have a strong presence in multiple communities, our event schedule is very busy and we are gaining more recognition every day! Core team hit several big bumps this year and is still recovering with a few more people onboarding.

Kiwi TCMS suffers from the problem that many of our users can't be contributors or simply don't want to!

In short: it is important for us to follow our mission and develop our core team so we can deliver on promises made in our roadmap! That requires a lot of time and effort which reduces short-term productivity.

Happy testing!

Starting with version 7.0, Kiwi TCMS pages displaying URLs to bugs also contain an info icon which shows additional information as a tooltip. This is designed to provide more contextual information about the bug. By default the tooltip shows the OpenGraph metadata for that URL. This article will explain how to override this in 2 different ways.

bug details shown

Option #1: using the caching layer

Additional bug information is cached in the application layer. The cache key is the bug URL! By default Kiwi TCMS uses local-memory caching, which isn't accessible to external processes but can be reconfigured very easily. This example changes the CACHES setting to use a directory on the file system like so:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache',
        'LOCATION': '/tmp/kiwi-cache',
        'TIMEOUT': 3600,
    }
}

Then you need to poll your 3rd party bug tracker (and/or other systems) and update the cache for each URL:

from django.core.cache import cache
from tcms.core.contrib.linkreference.models import LinkReference

for reference in LinkReference.objects.filter(is_defect=True):
    # possibly filter objects coming only from your own bug tracker
    # in case there are multiple trackers in use

    # custom methods to grab more information; must return strings
    title = fetch_title_from_bug_tracker(reference.url)
    description = fetch_description_from_bug_tracker(reference.url)

    # store the information in the Kiwi TCMS cache, keyed by the bug URL
    cache.set(reference.url, {'title': title, 'description': description})

Then execute the Python script above regularly. For example, use the following as your cron script:

#!/bin/bash
export VIRTUAL_ENV=/venv
export PATH=/venv/bin:${PATH}
cat /path/to/cache_updater.py | /Kiwi/manage.py shell

bug details from customized cache

IMPORTANT

  • Kiwi TCMS expires cache entries after an hour. Either change the TIMEOUT setting shown above or run your script more frequently
  • How to modify default Kiwi TCMS settings is documented here
  • The Python + Bash scripts above don't need to be on the same system where Kiwi TCMS is hosted. However they need the same Python 3 virtualenv and cache settings as Kiwi TCMS does
  • Information about Django's cache framework and available backends can be found here
  • memcached is a supported cache backend option, see here
  • django-elasticache is a backend for Amazon ElastiCache which provides several configuration examples
  • Both django-redis and django-redis-cache are good libraries which support Redis; see the example sketch after this list
  • Any 3rd party libraries must be pip3 install-ed into your own docker image
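For example, a hypothetical Redis-backed configuration using the django-redis package might look like the following (the connection URL is an assumption; adjust it to match your deployment):

CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': 'redis://127.0.0.1:6379/1',
        'TIMEOUT': 3600,
    }
}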

Option #2: extend bug tracker integration

Let's say you are already running a customized Docker image of Kiwi TCMS. Then you may opt in to extend the existing bug tracker integration code which provides the information shown in the tooltip. In this example I've extended the KiwiTCMS bug tracker implementation, but you can even provide your own from scratch:

# e.g. added inside tcms/issuetracker/kiwitcms.py, where the KiwiTCMS class is defined
class ExtendedBugTracker(KiwiTCMS):
    def details(self, url):
        result = super().details(url)

        result['title'] = 'EXTENDED: ' + result['title']
        result['description'] += '<h1>IMPORTANT</h1>'

        return result

Then import the new ExtendedBugTracker class inside tcms/issuetracker/types.py like so:

index 9ad90ac..2c76621 100644
--- a/tcms/issuetracker/types.py
+++ b/tcms/issuetracker/types.py
@@ -17,6 +17,9 @@ from django.conf import settings

 from tcms.issuetracker.base import IssueTrackerType
 from tcms.issuetracker.kiwitcms import KiwiTCMS  # noqa
+from tcms.issuetracker.kiwitcms import ExtendedBugTracker

and change the bug tracker type, via https://tcms.example.com/admin/testcases/bugsystem/, to ExtendedBugTracker.

bug details extended internally

IMPORTANT

  • ExtendedBugTracker may live anywhere on the filesystem but Python must be able to import it
  • It is best to bundle all of your customizations into a Python package and pip3 install it into your customized docker image
  • ExtendedBugTracker must be imported into tcms/issuetracker/types.py in order for the admin interface and other functions to find it. You may also place the import at the bottom of tcms/issuetracker/types.py
  • API documentation for bug tracker integration can be found here
  • Rebuilding the docker image is outside the scope of this article. Have a look at this Dockerfile for inspiration

NOTE: starting with Kiwi TCMS v8.5, external bug tracker integration classes are listed in the EXTERNAL_BUG_TRACKERS setting. If you are using v8.5 or newer, instead of importing ExtendedBugTracker in tcms/issuetracker/types.py you should override the list of available bug tracker integrations:

EXTERNAL_BUG_TRACKERS.append('mymodule.ExtendedBugTracker')

Happy testing!

Hello everyone, in this article I will outline the progress that the Kiwi TCMS team has made towards achieving the goals of our 2019 mission and roadmap. TL;DR: Kiwi TCMS has made progress since January; it's been tough and may not have been very visible. I feel like we've been behind schedule till now! The greatest positive thing has been community and team development!

Complete the internal refactoring

Status: minimal progress, needs help

CodeClimate progress is:

  • -30 code smells
  • -40 duplications
  • -30 other issues
  • 4% technical debt improvement
  • -200 hrs remaining until issues are fixed

This is mostly the result of code reviews and minor fixes, not targeted work.

We have not done any targeted work to resolve other issues reported by Scrutinizer or Pylint, to remove vendored-in JavaScript libraries, to refactor JavaScript, or to classify issues in 3rd party dependencies.

There are new people onboarding in the team right now and our plan is for them to start grinding at these issues very soon!

Redesign the UI templates with the help of Patternfly

Status: 50% done, needs help

There are 27 HTML templates remaining to be redesigned (from 59). That's mostly due to internal cleanup rather than targeted refactoring. More work on this item will probably follow towards the end of the year, after we get more priority items out of the way and get more of the new team members rolling!

Modernize reporting aka Telemetry

Status: in progress, a bit behind schedule

The specs for the new Telemetry system have been defined after taking into account feedback on GitHub issues. Anton Sankov is the leading developer for this feature. So far we have 2 telemetry reports merged: testing break-down and status matrix. The next one will be execution trends.

There are lots of minor issues or missing functionality in these first iterations (compared to specification). Our plan is to have the major use-cases satisfied first and then work to refine all of the existing telemetry pages.

Plugins for 3rd party test automation frameworks

Status: good, needs help

Until now we have released TAP, junit.xml and native JUnit 5 plugins. There's also a PHPUnit plugin which is more or less complete but not released yet. Both the JUnit 5 and PHPUnit plugins are developed by external contributors!

We often get asked for plugins for languages and frameworks we don't use or don't even know! Given that our expertise is mostly in Python we will gladly accept your pull requests if you decide to maintain or contribute to one of the plugins. This will also help us get insight into what automation frameworks people are using and how exactly you structure a test automation workflow around Kiwi TCMS.

Check out the documentation for links and more info.

Redefine bug-tracker integration

Status: no progress

Last week, right after OpenExpo, we did a check-up session and this was one of the areas identified as having zero progress. I have a strong preference to work on this feature myself but have not been able to due to various other items that need my attention.

The short version is that I'd prefer to remove all issue tracker specific code and allow the tester to add arbitrary URLs to link to existing bugs. How to do integration (even as simple as publishing a comment in the bug tracker) over a generic interface still eludes me. In the next few weeks I will kick-off this topic with a separate blog post/issue for everyone to comment on.

GitHub flow integration

Status: no progress

Our team spent some time making Kiwi TCMS the first open source TCMS available on the GitHub Marketplace. We will continue this integration effort and flow integration will emerge from that. There are also many things that need to be done to satisfy GitHub's requirements.

Agile integration with Trello

Status: no progress

Improve engineering productivity

Status: no progress

Our mission is to transform testing in your organization by providing the tools for that via Kiwi TCMS. It is astonishing that so far nobody has provided any kind of feedback in Issue #703 with regard to improving productivity in their teams!

We have some ideas which have been blocked by lack of resources on the team and refactoring tasks. Because we've adopted this as our mission this is an important item for us and we'll continue working on it as resources allow. Progress is to be expected towards the end of the year.

Community

Status: great, on track, needs work

This is our strongest area during the year so far. We have a strong presence in several communities, our event schedule is busy enough and we are gaining more recognition every day!

  • Hosted project stand at 3/5 conferences with 2 more on-track
  • Won the OpenAward for Best Tech Community
  • Hosted several presentations and workshops, with a few more on track
  • Found new talent to join the core team: 2 just ready to start contributing, 5 more in training
  • 1 more senior engineer as a mentor. We also have a few independent wanna-be contributors and will be hosting qualification interviews for a marketing assistant very soon
  • There are contributions and pull requests coming from users of Kiwi TCMS as well. We'd like to see more of course.
  • There are a couple of open source projects and companies using Kiwi TCMS who are friendly towards the project. We are working with them to get a public endorsement on the website and engage in more technical work together. Of course everyone has limited resources and is very busy :-(
  • Sponsors on Open Collective are just a few, but we didn't have any previously so this is a good sign.

This is the moment to mention that not all is honey and roses in open source land. Kiwi TCMS suffers from the problem that many of our users can't be contributors or simply don't want to!

Manual testers can't program. This is a fact, and a good-sized chunk of our user base actually performs manual testing. Those that can write automation and probably code decently well may not be familiar with Python and Django. At least in Bulgaria these two aren't very popular, definitely not among testers. That is to say, this part of the user base simply doesn't have the necessary skills to contribute, and the majority of what we need is code contribution!

Another (fairly big, IMO) group of users are coming from proprietary companies who view open source and Kiwi TCMS as a zero-cost option: something that they take free of charge and use without ever contributing back. They neither understand nor really care about open source culture.

To make things worse, we receive requests every single day via our private email addresses, or questions via IM, despite our website clearly stating community engagement rules. On a few occasions we have received very rude comments of the sort "our company demands you fix this", "is this going to be ready this year" (context implying entitlement), etc. To make things more ridiculous, we've even received support requests (via the contact form) from companies and start-ups who get their return address wrong, so we can't get in touch directly!

In short: don't demand anything from us unless you are ready to pay for it, work for it yourself or propose a mutually beneficial scenario. We do try to keep the community happy but more importantly follow our mission and develop our core team!

Happy testing!

Image of the award

Kiwi TCMS is the winner of the OpenAwards'19 Best Tech Community category! Big thanks to the jury, our contributors and core team, and the larger open source and quality assurance communities who voted for us and supported the project during all of those years.

This award is the best present we could get to mark the 10th anniversary of the project. More news on how we are progressing with the current roadmap will follow soon in a separate blog post.

Thank you & happy testing!

On Tuesday I hosted my pylint workshop during the regular Django Bulgaria meetup. This edition was the first which was practice-based.

Attendance numbers were low but participation was very good. We managed to create 4 new checkers for Kiwi TCMS:

Many thanks to all contributors. These new checkers have discovered quite a few new issues with Kiwi TCMS so this is an area which our team is going to improve.

Those who missed the workshop will be able to catch one of the next editions:

  • 26-29 August, GUADEC, Thessaloniki - TBC (presentation)
  • end of September, Python meetup, Zagreb - TBA
  • 03-05 October, PyCon Balkan, Belgrade - TBC
  • 11-13 October, HackConf, Sofia
  • 15-17 October, TestCon Europe, Vilnius - TBC (backup presentation)
  • 23-25 October, Testwarez, Ossa, Poland - TBC (presentation)
  • 14-15 November, Software Engineering Conference Russia, Saint-Petersburg - TBC
  • 20-22 November, ConTEST, New York - TBC (workshop and/or presentation)

Happy testing!

In the release notes for v6.5 we announced several plugins which will fetch test names and execution results from your automated test suite.

Plugins can be controlled via environment variables which affect how test results are recorded in the Kiwi TCMS database! This blog post is an introduction to how that works and what you can do with it! For this purpose I will be using the plugin-demo repository, which simulates real development work.

Full documentation and list of available plugins is available in chapter Automation Frameworks Plugins!

Always create new TestRun by default

The default behavior is always to create a new TestRun if the controlling variables are not overridden! Product name, version and build will receive default values if tests are running in Travis CI or Jenkins, as sketched after the values below. For example Travis Build #2 for commit d455fb4 creates TR-12 and TP-10!

Product=kiwitcms/plugin-demo
Version=d455fb42fb7c2aedadfd5f66de7d131109c03350
Build=2
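Here is a minimal sketch of how that default resolution could work. This is an illustration only: the TCMS_* names are the override variables discussed in this post, the TRAVIS_* names are Travis CI's standard environment variables, and the real plugin code may differ:

import os

# explicit TCMS_* overrides win; otherwise fall back to values provided by Travis CI
product = os.environ.get("TCMS_PRODUCT") or os.environ.get("TRAVIS_REPO_SLUG")
version = os.environ.get("TCMS_PRODUCT_VERSION") or os.environ.get("TRAVIS_COMMIT")
build = os.environ.get("TCMS_BUILD") or os.environ.get("TRAVIS_BUILD_NUMBER")

print(product, version, build)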

Next we convert the README file from Markdown to reStructuredText, which triggers Travis Build #3 for commit 418b80b. This build again creates a new TestRun and a new TestPlan for it - TR-14 and TP-12 respectively!

Product=kiwitcms/plugin-demo
Version=418b80b3bbb65a799f695dc59d488c76f560fa2b
Build=3

Important: we can see that the version is different, which will affect how artifacts are organized in Kiwi TCMS and possibly how you will report status to stakeholders!

Override ENV variables for more control

Let's say the QA team has decided that all test results must be reported under the same TestPlan. This means the version must be the same between various builds in Travis CI! To control this we export TCMS_PRODUCT_VERSION=master in CI before executing the TAP plugin! Check out the commit on GitHub to see how it is done!

This triggers Travis Build #4 for commit e484e59. Because this is the first time where Version == master the plugin creates TP-14 and reports the results under TR-16.

Product=kiwitcms/plugin-demo
Version=master
Build=4

Right after that I realized we can make this configuration a bit more generic, because our team is planning to support multiple versions of the product and development will be done in separate branches! Travis Build #5 for commit f1f2878 still ends up with Version == master because we are still working on the master branch! That is to say, the product is in active development.

Results are reported in TR-18 which is again part of TP-14.

Product=kiwitcms/plugin-demo
Version=master
Build=5

Travis Build #6 for commit df6153b adds new functionality (README badges) and reports test results in TR-20, which is again part of TP-14.

More ENV overrides

While giving status reports back to stakeholders and developers, the information that we have in the TestRun is the Build number! This follows the numbering scheme in Travis CI (or the Jenkins job number) and isn't very useful.

Let's define TCMS_BUILD to be the first 7 characters of the commit hash! When QA tells devel that something isn't working and redirects them to the TestRun, they can immediately use the Build information and git checkout the offending variant of the product for investigation.

This results in Travis Build #7 for commit bf75d0a, TR-22, TP-14.

Product=kiwitcms/plugin-demo
Version=master
Build=bf75d0a

Report results in pre-existing TestRun

There are many reasons you may want to do this:

  • include both manual and automation tests for the same build;
  • mix results from multiple test suites for the same build, e.g. unit, functional, performance
  • mix results from multiple but similar platforms in the same build, e.g. cross-platform application for iOS and Android

To do so I've created TR-24 beforehand and configured TCMS_RUN_ID=24 in my CI environment. TR-24 also contains TC-57: Verify we can report results from several plugins into the same TR. This was created and added via the web interface.

Note: mixing additional test cases can be done either before or after automation results are reported with the plugin!

Important: when reporting results to an existing TestRun, Kiwi TCMS plugins don't care which TestPlan this TR is in! In theory it is possible to report the results for Product=kiwitcms/plugin-demo into any TP/TR pair! However we don't want to do such a crazy thing, and instead I've created TR-24 inside the already existing TP-14!

Note: because I don't know the git commit beforehand, I've configured TR-24 with Build=unspecified. If I wanted, I could update this with the correct value once I know the commit hash for the related changes I am testing.

Important: the plugin-demo repository uses both kiwitcms-tap-plugin and kiwitcms-junit.xml-plugin at the same time! They differ in the way test names are compiled! Both are reported in TR-24. See the TC-57 text for information on how to distinguish between the two.

See Travis Build #8 for commit 85ad939, TR-24, TP-14.

Product=kiwitcms/plugin-demo
Version=master
Build=unspecified

Also check out the comments in the TR executions to see when and by whom the test case was executed.

So far there have been some tests which were failing (although Travis reports PASS), so I decided to fix them. Travis Build #9 for commit a25b384 is still configured with TCMS_RUN_ID=24. This means results will be reported in TR-24, effectively overwriting the previous results.

Check out the Change Log under each individual execution and you will see several timestamps when the status was updated! In other words, as long as TCMS_RUN_ID points to an existing TestRun, that TR will keep the results of the last test suite execution!

Continue development, restore ENV configuration

Travis Build #10 for commit c4f1ae5 removes TCMS_RUN_ID! Results are reported in TR-25, TP-14.

Product=kiwitcms/plugin-demo
Version=master
Build=c4f1ae5

Branch out for an LTS version

Our team has decided to make the first LTS release of the product. We branch out into the lts-v0 branch in git and cut the first NVR. This results in Travis Build #11 for commit 9f1ef71, TR-27, TP-16.

Product=kiwitcms/plugin-demo
Version=lts-v0
Build=9f1ef71

Because this is the first time we are running tests for this product version we end up with the newly created TP-16!

Note: see how the version was populated with the correct value for the git branch! This is because my CI config uses TCMS_PRODUCT_VERSION=$TRAVIS_BRANCH and no further changes were required! I made this change back in Travis Build #5, anticipating what would come in the future!

The product is now live and customers have reported critical bugs for it: the URLs for the badges in the README are wrong! I fix those and of course add more tests; see Travis Build #12 for commit 2d72754, TR-29 for TP-16.

Product=kiwitcms/plugin-demo
Version=lts-v0
Build=2d72754

TR-29 contains the new TC-58!

Cherry-pick between branches

In the lts-v0 branch customers have identified a serious issue. It doesn't exist on master but the test is still valid so I cherry-pick it! In git you can backport or forwardport very easily. Regardless of the direction Kiwi TCMS plugins will record the test results for you.

See Travis Build #13 for commit 31ae5b3, TR-31 for TP-14.

We can see that TC-58, which was originally implemented on the lts-v0 branch, is now present!

Summary

This is a very basic example of how you can organize collection and reporting for your automation test suite results with Kiwi TCMS. The links here only refer to artifacts created by kiwitcms-tap-plugin while in the DB we keep others as well.

There are feature requests for more ENV variables which will allow you to control TestPlan creation and child/parent relationship between test plans. See https://github.com/kiwitcms/tap-plugin/issues and vote with a :+1: reaction to help us plan for these features.

Kiwi TCMS automation framework plugins are nothing more than result parsers which talk back to a database. It is up to you to decide how to organize the collection of test results and how to report on them later, when the need arises.

Future installments in this post series will take a look at workflows with feature branches and pull requests and discuss possible organization scenarios. You are welcome to share your ideas in the comments below.

Happy testing!

Hello testers, Kiwi TCMS has taken on a brave new mission! We would like to transform the testing process by making it more organized, transparent & accountable for everyone on your team. Our goal is to improve engineering productivity and participation in testing. The following blog post outlines how we would like to achieve this and what goals we have set for ourselves this year.

Complete the internal refactoring

Last year we took on the challenge to bring a legacy code base up to modern coding standards. We did not complete that effort but made very good progress along the way. This is not a small task and that's why our team will continue with it this year.

CodeClimate report

  • CodeClimate: 0 issues, 0% technical debt, health score A
  • Scrutinizer: only A and B type issues
  • Pylint: 0 issues
  • Remove vendored-in Handlebars, jQuery, jQuery-UI and TableDnD JavaScript libraries in favor of existing npm dependencies
  • Front-end uses the existing JSON-RPC instead of backend views that are only used for AJAX requests. Tip: these are usually accessed via postToURL() and jQ.ajax() on the front-end
  • Inspect and classify all 3rd party issues reported from Coverity and Bandit. Report and fix what we can, ignore the rest that do not affect Kiwi TCMS.

Redesign the UI templates with the help of Patternfly

There are 59 templates remaining to be converted to a modern look and feel. Along with them comes more refactoring and even redesign of the existing pages and the workflow around them. Together with refactoring this will make Kiwi TCMS easier to use and also to maintain.

Modernize reporting

We are planning to remove the existing reports feature because it is not well designed. We will re-implement the existing functionality that our community finds useful, add new types of reports (incl. nicer graphics and UI) and make it possible for the reporting sub-system to be more easily extendable.

Phase out is planned to begin after 1st March 2019! Until then we are looking for your feedback. Please comment in Issue #657!

Plugins for 3rd party test automation frameworks

These will make it easier to collect results from automated test suites into Kiwi TCMS for later analysis. Instead of creating scripts that parse the results and talk to our API you will only have to install an additional package in your test environment and configure the test runner to use it! Automation test results will then appear inside Kiwi TCMS.

If you would like to use such functionality leave your vote inside GitHub issues! In case you would like to write a test-runner plugin you can find the specification here.

Redefine bug-tracker integration

Question: Does Kiwi TCMS integrate with JIRA?

Answer: Well, it does. How exactly do you want to integrate?

... silence ...

The dialog above happens every time someone asks me about bug-tracker integration, especially with JIRA. The thing is, integration is a specific set of behaviors which may or may not be desired by a particular team. As of now Kiwi TCMS is able to open a URL to your bug-tracker with predefined field values, add comments to bug reports and report a simple summary of bugs inside a TestRun.

We recognize this may not be enough, and together with the community we really need to define what bug tracker integration means! Tools in the broader domain of application lifecycle management (of which a TCMS is a sub-set) have an integrated bug tracking system. We could add something like this and save you the trouble of using JIRA; however, many teams have already invested in integrating their infrastructure or simply like other tools. For example we love GitHub Issues and our team regularly makes public reports about issues that we find internally!

GitHub flow integration

Developers have their GitHub PR flow and, if they have done the job of writing unit tests, they will merge only when things are green! This leaves additional testing efforts somewhat to the side and doesn't really help with transparency and visibility. I'm not going to mention having an automatically deployed staging environment for every change, because very few teams have actually managed to do this effectively.

Kiwi TCMS statuses on GitHub PR

  • Goal: Figure out how Kiwi TCMS can integrate with GitHub flow and bridge the gap. Please share and +1 your wildest ideas in Issue #700.
  • Follow up: depending on the results in #700 we will follow with other goals and sub-tasks

Agile integration with Trello

Speaking of a modern engineering flow: is your team truly agile? When and how do you plan your testing activities? Before the devel sprint or afterwards? How many testers take part in refining the product backlog and working on user stories?

Similar to the GitHub flow, lots of teams and open source projects are using Trello to effectively organize their development process. Testing should not be left behind and Kiwi TCMS may be able to help.

  • Goal: Figure out how Kiwi TCMS fits into the overall devel-test-planning process for agile teams and what we can do to make this easier for testers. Please share and +1 your wildest ideas in Issue #701
  • Follow up: depending on the results in #701 we will follow with other goals and sub-tasks

Improve engineering productivity

What makes a test engineer productive when they need to assess product risk and new features, when mapping product requirements documents (PRD) to test plans and test cases, or when collaborating on user stories and behavior specifications? What makes developers, product owners, designers and other professionals productive when it comes to dealing with testing?

For example consider the following workflow:

  • Company has idea for a new product
  • In case this is a big product it may have its own mission, i.e. what kind of problem it is trying to solve and for which group of customers
  • Product backlog is then created which outlines features that map to the product mission
  • Then the team, together with test engineers, performs example mapping and discusses and refines the initial feature requirements. User stories are created
  • Behavior specification may also be created
  • Test plans and test cases are the immediate product of BDD specs and desired user stories

Later we iterate through the sprints and for each sprint something like this happens:

  • Desired product features are planned for development. They must be complete at least in terms of requirements, specs and tests
  • Devel writes code, maybe some unit tests, testers can also write automated tests and/or manually verify the current state of the feature being developed
  • Testing, including exploratory testing, is performed before the feature is merged
  • Rinse and repeat

Devel is also part of testing, right? Product owners, UX and interaction designers as well. Producing quality software product is a team effort!

At every step of the way Kiwi TCMS can provide notification wizards, guidelines and/or documentation for best practices, and facilitate tooling, e.g. to write user stories and assess them or to map out an exploratory testing session, etc. The list of ideas is virtually endless. We can even go into deep learning, AI and blockchain, but honestly who knows how to use them in testing?

Our team is not quite sure how this goal will look 3 months from now, but we are certain that testing needs to happen first, last and all the time during the entire software development lifecycle. By providing the necessary functionality and tools in Kiwi TCMS we can boost engineering productivity and steer the testing process in your organization into a better, more productive direction which welcomes participation from all engineering groups.

Let's consider another point of view: testing is a creative activity which benefits from putting your brain into a specific state of mind! For example Gherkin (the Given-When-Then language) has the benefit of forcing you to think about behavior, and while doing so you are vocalizing the various roles in the system, what kinds of actions are accepted and what sort of result is expected! Many times this will help you remember or discover missing scenarios and edge cases and raise even more questions!

Crazy ideas, brain dumps and +1 as always are welcome in Issue #703.

Community

Coding alone is not fun! We are looking to expand our core team and the list of occasional contributors, and here's what you can do to help us. The following are mostly organizational goals:

  • Goal: participate in 5 conferences with a project stand
  • Goal: define how we find, recruit and onboard new team members. The foundation is already set in TP-3
  • Goal: clearly mark GitHub issues which are suitable for external contributors who don't want to spend lots of time learning how Kiwi TCMS works under the hood. We're going to tag all such issues with the GitHub help wanted label

Development policy

Our team will be working on areas related to the goals above. A +1 reaction on GitHub issues will help us prioritize what we work on!

Bug fixes and other issues will occasionally be slipped into the stream, and pull requests from non-team contributors will be reviewed and merged in a timely fashion.

There is at least 1 full day of work that goes on behind the scenes when a new version is officially released: compiling the changelog, building and uploading images, creating the blog post and newsletter announcement, sharing on social media, etc. We also deploy to our own Kiwi TCMS instance first, as a final check before making everything public!

New PyPI tarballs and Docker images will be released every few weeks as we see fit; this has been our standard process. We try to align releases with Django's release schedule and to cut a new version when known security vulnerabilities have been fixed. However we can't guarantee this will always be the case!

If you are in a hurry and need something quickly, the best option is to send a pull request, build your own Docker image from source and maybe consider sponsoring us via Open Collective!

Happy testing!

Roadmap status report for 2018

Hello everyone, in this article I will outline the progress that the Kiwi TCMS team has made towards achieving the goals in our 2018 roadmap (mid-year update here). TL;DR: goals are 62% complete. Refactoring legacy code is showing good results, less so on the front-end side, and some items are still in progress!

Make code easier to maintain

Status: good progress

Initially CodeClimate reported a "D" rating with 600+ code smells, 600+ duplications and an estimated one year of effort to resolve them. We're now down to a "C" rating with 171 smells and 203 duplications.

The level of technical debt has dropped from 32.5% down to 17.7% and we have removed around 14000 lines of Python code and 8000 lines of JavaScript code without losing significant functionality.

Check out the stats for more info!

Use pylint and pylint-django

Status: almost finished

Both pylint and pylint-django have been integrated into our CI workflow. There are even some custom built plugins that we use. The number of reported issues is down to 100 from 4000+ initially. These are predominantly fixme comments located in parts of the code that are scheduled for removal and refactoring.

Render HTML, return JSON

Status: moderate progress

Several views were modified to return pure JSON but we've not done any targeted work to resolve this issue. A number of other views have been removed in favor of using the existing JSON-RPC layer.

This is an internal refactoring effort which isn't very visible from the outside. It is also one of the factors contributing to the large amount of removed source code.

Submit forms, post JSON, GET clean URLs

Status: no progress

Not much has been done in this area except the occasional refactoring to JSON-RPC.

API layer

Status: complete

Documentation

Status: moderate progress, dropped

All RPC methods have been documented! The rest of the internals will be documented as we go along.

No vendored JavaScript libraries

Status: good progress

We still carry around jQuery, jQuery-UI and Handlebars.js. They will be removed once the pages using them are converted to use the Patternfly widgets library.

Less HTML templates with better organization

Status: moderate progress

There are still over 50 HTML templates in tcms/templates/ that need to be refactored into Patternfly. We've been working on them one at a time and will focus more on this effort in the next couple of months.

Modern interface with Patternfly

Status: moderate progress

Some of the pages have been converted to use Patternfly. The most important pages that still have a different look and feel are TestPlan view, TestCase view and TestRun view. These are also the hardest to convert because they have lots of tabs/components which pull information from various places. Our goal is to create reusable widgets for the various components (e.g. a list of TestCases) and then include these components into several different templates to minimize code duplication.

JavaScript updates and front-end testing

Status: moderate progress

A number of JavaScript functions have been refactored and removed during the past few releases but there are still thousands of lines of code left to deal with. This effort is mostly happening in parallel with the Patternfly redesign. We still don't have anything to test front-end JavaScript functionality!

Community efforts

Status: good progress

We are seeing a steady stream of new users registered on https://public.tenant.kiwitcms.org and there are several active contributors on GitHub. Most of our translators are very active and keep their respective languages fresh and up to date!

Kiwi TCMS was represented at OSCAL Tirana, DjangoCon Heidelberg, PyCon Prague, HackConf Sofia, PiterPy St. Petersburg and OpenFest Sofia. We've also been approved for a project stand at FOSDEM 2019 so watch this blog for more news.

Happy testing!

Test runner plugin specification

Happy Monday, testers! Kiwi TCMS needs your help! We are looking for developers who wish to create plugins for popular test runners that will record test results in Kiwi TCMS! Initially we are looking for plugins for Python's unittest, Django and JUnit!

What is a test runner?

When working with automated testing you have several components:

  • A test module (or test script), e.g. test_models.py which contains tests for your software;
  • A test framework, e.g. Python nose, which provides the structure for test classes and methods and assert methods to use;
  • A test runner, which knows how to discover your test scripts, load them, execute the testing scenarios inside of them and report the results.

Very often all of the components above live together inside the testing framework but they don't have to. For example the standard unittest module in Python provides a test runner, but there are also nose and py.test, and Django provides its own test runner that knows how to work with the database.
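
To make the distinction concrete, here is a minimal test module which the standard unittest runner can discover and execute (the file name test_example.py is just an example):

    # test_example.py - a minimal test module
    import unittest

    class TestAddition(unittest.TestCase):
        """The framework provides the structure for test classes/methods."""

        def test_two_plus_two(self):
            # assertEqual is one of the framework's assert methods
            self.assertEqual(2 + 2, 4)

    if __name__ == '__main__':
        # hand over to the test runner which discovers & executes the tests
        unittest.main()

Running python -m unittest discover invokes the runner, which finds this module by its test_ prefix and reports the results.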

Workflow organization

Once you agree to write a plugin we are going to create a separate GitHub repository where you will be granted write privileges, making you an independent contributor to the Kiwi TCMS project!

The design and architecture of the plugin are up to you, following the practices established by the testing framework in question. You will also have to create a test suite for your plugin based on the specification below.

You are expected to use public.tenant.kiwitcms.org while working on the plugin and afterwards. This is known as eating your own dog food!

For Python we provide the tcms-api module which already takes care of the communications layer. For other languages you will have to create this layer or depend on other open source libraries that provide an XML-RPC or JSON-RPC client!

You can use this gist for inspiration!
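
For a rough idea of what plugin initialization could look like, here is a minimal sketch of the configuration handling described in the scenarios below. It uses only the Python 3 standard library; the class name and the config section/key names are made up for illustration:

    # sketch: plugin initialization following the scenarios below
    import logging
    import os
    import sys
    from configparser import ConfigParser

    class PluginConfig:                      # class name is made up
        def __init__(self):
            config_path = os.path.expanduser('~/.tcms.conf')
            if os.path.exists(config_path):
                logging.info('Loading configuration from %s', config_path)
                parser = ConfigParser()
                parser.read(config_path)
                # section/key names are assumptions for this sketch
                self.url = parser.get('tcms', 'url', fallback='')
                self.username = parser.get('tcms', 'username', fallback='')
                self.password = parser.get('tcms', 'password', fallback='')
            else:
                # fall back to the environment
                self.url = os.environ.get('TCMS_API_URL', '')
                self.username = os.environ.get('TCMS_USERNAME', '')
                self.password = os.environ.get('TCMS_PASSWORD', '')

            for name, value in (('TCMS_API_URL', self.url),
                                ('TCMS_USERNAME', self.username),
                                ('TCMS_PASSWORD', self.password)):
                if not value:
                    logging.warning('%s is not configured', name)
                    sys.exit(1)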

Behavior Specification

Please use the comments section to discuss unclear behavior and missing scenarios!

    SUMMARY: Will use configuration file if it exists
    GIVEN: the file ~/.tcms.conf exists
    WHEN: plugin initializes
    THEN: the plugin will log an info message, read the file and
    THEN: configure TCMS_API_URL, TCMS_USERNAME, TCMS_PASSWORD
          variables with the respective values


    SUMMARY: Will use ENVIRONMENT if configuration file doesn't exist
    GIVEN: the file ~/.tcms.conf does not exist
    WHEN: plugin initializes
    THEN: the plugin will read configuration from environment and configure
          the following variables/class members:
          TCMS_API_URL, TCMS_USERNAME and TCMS_PASSWORD


    SUMMARY: Will exit if TCMS_API_URL not configured
    GIVEN: TCMS_API_URL variable is empty
    WHEN: plugin initializes
    THEN: log a warning message and exit
    AND: depending on the test runner framework set exit status 1


    SUMMARY: Will exit if TCMS_USERNAME not configured
    GIVEN: TCMS_USERNAME is empty
    WHEN: plugin initializes
    THEN: log a warning message and exit
    AND: depending on the test runner framework set exit status 1


    SUMMARY: Will exit if TCMS_PASSWORD not configured
    GIVEN: TCMS_PASSWORD is empty
    WHEN: plugin initializes
    THEN: log a warning message and exit
    AND: depending on the test runner framework set exit status 1


    SUMMARY: Will re-use existing TestRun & TestPlan if configured
    GIVEN: TCMS_RUN_ID environment variable is not empty
    WHEN: plugin initializes
    THEN: this will be the Current_TestRun record to which the plugin is
          going to add test execution results
    AND: Current_TestPlan document in which the plugin will
         search for test cases becomes Current_TestRun.plan


    SUMMARY: Will create new TestPlan & TestRun if TCMS_RUN_ID not configured
    GIVEN: TCMS_RUN_ID environment variable is empty
    THEN: plugin will create a new TestPlan in Kiwi TCMS with attributes:
        name='Automated test plan for %(product)'
        product='%(product)'
        product_version='%(version)'
        type='Unit'
    WHERE: %(product) is a placeholder for TCMS_PRODUCT==TRAVIS_REPO_SLUG==JOB_NAME
           %(version) is a placeholder for TCMS_PRODUCT_VERSION==TRAVIS_COMMIT==TRAVIS_PULL_REQUEST_SHA==GIT_COMMIT
    THEN: plugin will create a new TestRun in Kiwi TCMS with attributes:
        summary='Automated test run ....'
        plan=Current TestPlan
        build='%(build)'
        manager=TCMS_USERNAME
    WHERE: %(build) is a placeholder for TCMS_BUILD==TRAVIS_BUILD_NUMBER==BUILD_NUMBER
    Environment variables are specified in:
    https://docs.travis-ci.com/user/environment-variables#default-environment-variables
    https://wiki.jenkins.io/display/JENKINS/Building+a+software+project#Buildingasoftwareproject-belowJenkinsSetEnvironmentVariables

    SUMMARY: Will not create duplicate Product, Version & Build if they already exist
    GIVEN: TCMS_RUN_ID is not configured
    AND: %(product) exists
    AND: %(version) exists
    AND: %(build) exists
    WHEN: plugin tries to auto-create TestPlan and TestRun
    THEN: plugin will re-use %(product), %(version) and %(build) from the database
    AND: not try to auto-create them


    SUMMARY: Will auto-create Product, Version & Build if they don't exist
    GIVEN: TCMS_RUN_ID is not configured
    AND: %(product) doesn't exist
    AND: %(version) doesn't exist
    AND: %(build) doesn't exist
    WHEN: plugin tries to auto-create TestPlan and TestRun
    THEN: %(product), %(version) and %(build) will be created automatically


    SUMMARY: Unit test names are added to TestPlan
    GIVEN: we have good plugin configuration
    WHEN: plugin loops over unit tests emitted by the test runner
    THEN: plugin will check Current_TestPlan for a TestCase with the same name
    AND: if test case doesn't exist in Current_TestPlan
    THEN: it will be added to Current_TestPlan
    hint: it is probably best to process all unit test results at the end!


    SUMMARY: Unit test names are added to TestRun
    GIVEN: we have good plugin configuration
    WHEN: plugin loops over unit tests emitted by the test runner
    THEN: plugin will check Current_TestRun for a TestCaseRun object which matches
          the current unit test name
    hint: (or Current_TestCase object from previous scenario, depending on implementation)
    AND: if such TestCaseRun doesn't exist in Current_TestRun
    THEN: it will be added to Current_TestRun
    hint: it is probably best to process all unit test results at the end!


    SUMMARY: Current_TestRun is updated with unit test results
    GIVEN: we have good plugin configuration
    WHEN: plugin loops over unit tests emitted by the test runner
    THEN: plugin will check Current_TestRun for a TestCaseRun object which matches
          the current unit test name
    hint: (or Current_TestCase object from previous scenario, depending on implementation)
    AND: if TestCaseRun object exists in Current_TestRun
    THEN: its status will be updated with the execution result coming from the test runner
    hint: it is probably best to process all unit test results at the end!

Happy testing!

A friend from Red Hat sent me an email asking about Kiwi TCMS performance so I did an experiment to establish a baseline. For API requests I got 7.5 req/sec or 130 msec/req which is 1.5x slower than GitHub!

I used perf-script (see here) to measure that. The script takes the first 250 test cases from our test suite and on every execution creates a new TestPlan (1 API request), then creates new test cases (250 requests), adds cases to test plan (250 requests), creates new product build (1 API request), creates new TestRun (1 API request), adds test cases to test run (250 requests) and finally updates the statuses (250 requests).

A total of 1003 API requests are sent to Kiwi TCMS every time you start this script! An example is available at TR #567!
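
To make that sequence concrete, here is a heavily condensed sketch of the same calls using Python's standard xmlrpc.client. The method names mirror the Kiwi TCMS XML-RPC API, but the payload fields, IDs and returned key names are illustrative assumptions, not a copy of the real perf-script:

    # condensed sketch of the perf-script request sequence;
    # payload fields, IDs and returned key names are illustrative
    import xmlrpc.client

    rpc = xmlrpc.client.ServerProxy('https://tcms.example.com/xml-rpc/')
    rpc.Auth.login('username', 'password')          # authentication (extra request)

    plan = rpc.TestPlan.create({'name': 'perf plan', 'product': 1,
                                'product_version': 1, 'type': 1})      # 1 request

    case_ids = []
    for index in range(250):                        # first 250 test cases
        case = rpc.TestCase.create({'summary': 'case %d' % index,
                                    'category': 1, 'priority': 1})     # 250 requests
        case_ids.append(case['id'])
        rpc.TestPlan.add_case(plan['id'], case['id'])                  # 250 requests

    build = rpc.Build.create({'name': 'perf build', 'product': 1})     # 1 request
    run = rpc.TestRun.create({'summary': 'perf run', 'plan': plan['id'],
                              'build': build['id'], 'manager': 1})     # 1 request

    for case_id in case_ids:
        case_run = rpc.TestRun.add_case(run['id'], case_id)            # 250 requests
        rpc.TestCaseRun.update(case_run['case_run_id'],
                               {'status': 2})                          # 250 requests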

On localhost, running the development server (./manage.py runserver) with an SQLite database I got:

$ time python perf-script

real    2m6.450s
user    0m1.069s
sys     0m0.331s

$ time python perf-script

real    2m7.472s
user    0m1.057s
sys     0m0.342s

$ time python perf-script

real    2m9.368s
user    0m1.072s
sys     0m0.351s

$ time python perf-script

real    2m9.197s
user    0m1.050s
sys     0m0.353s

This measures at roughly 128 msec/req or 7.8 req/sec!
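
That number comes straight from the wall-clock times above and the 1003 requests per run (a quick back-of-the-envelope check in Python):

    # back-of-the-envelope: 1003 requests per run vs. averaged 'real' times
    times = [126.450, 127.472, 129.368, 129.197]   # localhost runs, in seconds
    average = sum(times) / len(times)              # ~128.1 seconds
    print('msec/req:', average / 1003 * 1000)      # ~127.7
    print('req/sec:', 1003 / average)              # ~7.8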

public.tenant.kiwitcms.org is running on an AWS t2.micro instance (via docker-compose) with the default centos/mariadb image! No extra settings or changes. I used the same computer over a WiFi connection and a pretty standard home-speed Internet connection. Times are:

$ time python perf-script

real    2m18.983s
user    0m1.175s
sys     0m0.095s

$ time python perf-script

real    2m25.937s
user    0m1.156s
sys     0m0.108s

$ time python perf-script

real    2m24.120s
user    0m1.102s
sys     0m0.098s

$ time python perf-script

real    2m21.521s
user    0m1.154s
sys     0m0.091s

This measures at roughly 142 msec/req or 7.0 req/sec!

Note: since I am using Python 3.6 I had to modify the file /opt/rh/rh-python36/root/lib64/python3.6/ssl.py to read:

# Used by http.client if no context is explicitly passed.
_create_default_https_context = _create_unverified_context # this disables HTTPS cert validation

The issue has been reported in RHBZ #1643454

Happy testing!

Kiwi TCMS team updates

I am happy to announce that our team is steadily growing! As we work through our roadmap (status update here) and on-board new team members, I start to feel the need for a bit more structure and organization behind the scenes. I also wish for consistent contributions to the project (commit early, commit often) so I can better estimate the resources that we have!

I am also actively discussing Kiwi TCMS with lots of people at various conferences, which generates many ideas for the future. The latest SEETEST in Belgrade was particularly fruitful. Some of these ideas are pulling in different directions and I need help to keep them under control!

Development-wise I sometimes lose track of what's going on and who's doing what between working on Kiwi TCMS, preparing for conferences and venues to promote the project, reviewing other team members' code, trying not to forget to check in on progress (especially by interns), recruiting fresh blood and thinking about the overall future of the project. Our user base is growing and there are days when I feel like everything is happening at once or that something needs to be implemented ASAP (which is usually true anyway)!

Meet Rayna Stankova in the role of our team coach! Reny is a director for Women Who Code Sofia, senior QA engineer at VMware, mentor with CoderDojo Bulgaria and a long-time friend of mine. Although she is an experienced QA in her own right, she will be contributing to the people side of Kiwi TCMS and less so technically!

Her working areas will be planning and organization:

  • help us (re)define the project vision and goals
  • work with us on roadmaps and action plans so we can meet the project goals faster
  • help us (self) organize so that we are more efficient, including checking progress and blockers (aka enforcer), and meeting the aforementioned consistency goal
  • serve as our professional coach, motivator and somebody who will take care of team health (yes I really suck at motivating others)

and generally serving as another very experienced member of the team!

We did a quick brainstorming session yesterday and have already started to produce results (#smileyface)! We do have a team docs space to share information (non-public for now, will open it up gradually as we grow) and came up with the idea to use Kiwi TCMS as a check-list for our on-boarding/internship process!

I don't know how it will play out but I do expect the team to self-improve, be inspired, become more focused and more productive! All of this also applies to myself, even more so!

Existing team members progress

Last year we started with 2 existing team members (Tony and myself) and 3 new interns (Ivo, Kaloyan and Tseko) who built this website!

Tony is the #4 contributor to Kiwi TCMS in terms of number of commits and is on track to surpass one of the original authors (before Kiwi TCMS was forked)! He's been working mostly on internal refactoring and resolving the thousands of pylint errors that we had (down to around 500 I think). This summer Tony and I visited the OSCAL conference in Tirana and hosted an info booth for the project.

Ivo is the #5 contributor in terms of number of commits. He did learn very quickly and is working on getting rid of the remaining pylint errors. His ability to adapt and learn is quite impressive actually. Last month he co-hosted a git workshop at HackConf, a 1000+ people IT event in Sofia.

Kaloyan did most of the work on our website initially (IIRC). Now he is studying in the Netherlands and not active on the project. We are working to reboot his on-boarding and I'm hoping he will find the time to contribute to Kiwi TCMS regularly.

From the starting team only Tseko decided to move on to other ventures after he contributed to the website.

Internship program

At Kiwi TCMS we have a set of training programs that teach all the necessary technical skills before we let anyone actively work on the project, let alone become a team member.

Our new interns are Denitsa Uzunova and Desislava Koleva. Both of them are coming from Vratsa Software Community and were mentors at the recently held CodeWeek hackathon in their home city! I wish them fast learning and good luck!

Happy testing!

In this blog post I will show more ways to customize Kiwi TCMS by adding logging capabilities to the API backend. This is in fact functionality which our team deemed not required for upstream, and it was removed in PR #436.

Start by creating the following directory structure:

    api_logging/
        __init__.py
        handlers.py
        models.py

This is a small Django application that will log every call to the API backend. __init__.py is empty and the other two files look like this:

    # models.py contains DB schema for your new table
    from django.db import models
    from django.conf import settings

    class ApiCallLog(models.Model):
        executed_at = models.DateTimeField(auto_now_add=True)
        user = models.ForeignKey(settings.AUTH_USER_MODEL, null=True, blank=True,
                                 on_delete=models.CASCADE)
        method = models.CharField(max_length=255)
        args = models.TextField(blank=True)

        def __str__(self):
            return "%s: %s" % (self.user, self.method)

Then create the handlers:

    # handlers.py overrides the RPC handlers coming from django-modernrpc
    from modernrpc import handlers

    from django.conf import settings
    from django.contrib.auth import get_user_model

    from .models import ApiCallLog

    def log_call(request, method_name, args):
        """ Log an RPC call to the database or stdout in DEBUG mode. """
        request_user = request.user
        if not request_user.is_authenticated:
            # create an anonymous user object for logging purposes
            request_user, _ = get_user_model().objects.get_or_create(
                username='Anonymous',
                is_active=False)

        if method_name is None:
            method_name = '--- method_name missing ---'

        if settings.DEBUG:
            print('API call:: user: {0}, method: {1}, args: {2}'.format(
                request_user,
                method_name,
                args))
        else:
            ApiCallLog.objects.create(
                user=request_user,
                method=method_name,
                args=str(args))

    class XMLRPCHandler(handlers.XMLRPCHandler):
        def process_request(self):
            encoding = self.request.encoding or 'utf-8'
            data = self.request.body.decode(encoding)
            params, method_name = self.loads(data)

            log_call(self.request, method_name, params)
            return super().process_request()

    class JSONRPCHandler(handlers.JSONRPCHandler):
        def process_single_request(self, payload):
            method_name = payload.get('method', None)
            params = payload.get('params')

            log_call(self.request, method_name, params)
            return super().process_single_request(payload)

NOTE: You will have to execute ./manage.py makemigrations api_logging to create the initial migration for Django. This could be easier if you place the above directory into an existing Django application or craft the migration file by hand!
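
If you decide to craft the migration file by hand, it would look roughly like this for the model above (a sketch; save it as api_logging/migrations/0001_initial.py, next to an empty migrations/__init__.py):

    # api_logging/migrations/0001_initial.py - hand-written migration sketch
    from django.conf import settings
    from django.db import migrations, models
    import django.db.models.deletion

    class Migration(migrations.Migration):
        initial = True

        dependencies = [
            migrations.swappable_dependency(settings.AUTH_USER_MODEL),
        ]

        operations = [
            migrations.CreateModel(
                name='ApiCallLog',
                fields=[
                    ('id', models.AutoField(auto_created=True, primary_key=True,
                                            serialize=False, verbose_name='ID')),
                    ('executed_at', models.DateTimeField(auto_now_add=True)),
                    ('method', models.CharField(max_length=255)),
                    ('args', models.TextField(blank=True)),
                    ('user', models.ForeignKey(
                        blank=True, null=True,
                        on_delete=django.db.models.deletion.CASCADE,
                        to=settings.AUTH_USER_MODEL)),
                ],
            ),
        ]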

Finally, create a local_settings.py file which will override the Kiwi TCMS defaults:

    # local_settings.py
    from django.conf import settings

    settings.INSTALLED_APPS += [
        'api_logging',
    ]

    MODERNRPC_HANDLERS = ['api_logging.handlers.XMLRPCHandler',
                          'api_logging.handlers.JSONRPCHandler']

Then put everything together in a Dockerfile like so:

    FROM kiwitcms/kiwi

    COPY ./api_logging/ /venv/lib64/python3.6/site-packages/api_logging/
    COPY local_settings.py /venv/lib64/python3.6/site-packages/tcms/settings/

Kiwi TCMS will import your local_settings.py and enable the logging application. Now build your customized Docker image and use it for deployment!

Happy testing!

This is the first publication in our customization series. It will show you how to override any template used by Kiwi TCMS. As an example we will override the email template that is used when registering a new account. By default the email text looks like this:

    Welcome {{ user }},
    thank you for signing up for an {{ site_domain }} account!

    To activate your account, click this link:
    {{ confirm_url }}

https://public.tenant.kiwitcms.org runs a custom Docker image based on kiwitcms/kiwi. For this image the confirmation email looks like this:

    Welcome {{ user }},
    thank you for signing up for our Kiwi TCMS demo!

    To activate your account, click this link:
    {{ confirm_url }}

    GDPR no longer allows us to automatically subscribe you to
    our newsletter. If you wish to keep in touch and receive emails
    with news and updates around Kiwi TCMS please subscribe at:
    https://kiwitcms.us17.list-manage.com/subscribe/post?u=9b57a21155a3b7c655ae8f922&id=c970a37581

    --
    Happy testing!
    The Kiwi TCMS team
    http://kiwitcms.org

The file that we want to override is tcms/templates/email/confirm_registration.txt.

Create a local directory (a git repository) which will hold your customization configuration and create a file named templates.d/email/confirm_registration.txt with your text!

Next you want to make this file available inside your docker image, so your Dockerfile should look like this:

    FROM kiwitcms/kiwi

    COPY ./templates.d/ /venv/lib64/python3.6/site-packages/tcms/overridden_templates/
    COPY local_settings.py /venv/lib64/python3.6/site-packages/tcms/settings/

where local_settings.py contains

    import os
    from django.conf import settings

    settings.TEMPLATES[0]['DIRS'].insert(0, os.path.join(settings.TCMS_ROOT_PATH, 'overridden_templates'))

This instructs Django to look into overridden_templates first and use any templates it finds there, while the Dockerfile makes your files available at that specific location inside the docker image.
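
You can quickly verify that the override is picked up from a Django shell inside the container (a small verification sketch):

    # verification sketch, e.g. inside ./manage.py shell
    from django.conf import settings
    from django.template.loader import get_template

    print(settings.TEMPLATES[0]['DIRS'])    # overridden_templates comes first
    template = get_template('email/confirm_registration.txt')
    print(template.origin.name)             # should point inside overridden_templates/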

This approach can be used for all templates that you wish to override. Take into account that the relative file paths must match (Django searches for templates by directory path). Now build your customized Docker image and use that for deployment!

Happy testing!

When you start Kiwi TCMS by running docker-compose up (see here) it will automatically create 2 volumes: kiwi_db_data and kiwi_uploads. This blog post will outline how to backup these docker volumes.

Note: in the instructions below kiwi_db is the container name and kiwi is the database name used inside the docker-compose.yml file!

MariaDB/MySQL database

To export all contents from the docker container execute the command:

docker exec -i kiwi_db mysqldump --user=<username> --password=<password> kiwi > backup.sql

This will create a file named backup.sql in the current directory, outside of the running container!

You can restore the database contents by using the following command:

cat backup.sql | docker exec -i kiwi_db mysql --user=<username> --password=<password> -v kiwi

Notes:

  1. Depending on your scenario you may want to remove the existing volume (docker-compose down && docker volume rm kiwi_db_data) before restoring the database!
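
If you want to run backups on a schedule, a small wrapper script can timestamp each dump (a minimal sketch assuming the container and database names from above; DB_USER and DB_PASS are hypothetical environment variables holding your credentials):

    #!/usr/bin/env python3
    # sketch: timestamped dump of the kiwi database from the kiwi_db container
    import os
    import subprocess
    from datetime import datetime

    OUTFILE = 'backup-%s.sql' % datetime.now().strftime('%Y%m%d-%H%M%S')

    # DB_USER and DB_PASS are hypothetical variables holding the credentials
    command = ['docker', 'exec', '-i', 'kiwi_db', 'mysqldump',
               '--user=%s' % os.environ['DB_USER'],
               '--password=%s' % os.environ['DB_PASS'],
               'kiwi']

    with open(OUTFILE, 'w') as backup:
        subprocess.run(command, stdout=backup, check=True)
    print('Wrote', OUTFILE)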

Postgres database

To export all contents from the docker container execute the command:

docker exec -i kiwi_db pg_dump -U <username> --dbname=kiwi -F c > backup.bak

This will create a file named backup.bak in the current directory, outside of the running container. This is a PostgreSQL custom database dump format which contains all data and schema definitions. That is a binary file which can be read with the pg_restore command!

To drop and restore the entire database execute:

docker exec -i kiwi_db psql -c "DROP DATABASE IF EXISTS kiwi;"
cat backup.bak | docker exec -i kiwi_db pg_restore -U <username> --dbname=template1 -vcC

Multi-tenant database

The kiwitcms-tenant add-on and/or Kiwi TCMS Enterprise work only on Postgres! Each tenant (aka namespace) uses a separate database schema. The first schema name is public.

The backup and restore instructions shown above operate on all tenants together! If you want to [drop and] restore an individual tenant then use the commands:

docker exec -it kiwi_db psql --dbname=kiwi

kiwi=> DROP SCHEMA $tenant_name CASCADE;
....
DROP SCHEMA
kiwi=> CREATE SCHEMA $tenant_name;
CREATE SCHEMA
kiwi=> Ctrl+D

cat backup.bak | docker exec -i kiwi_db pg_restore -U <username> --dbname=kiwi -v --schema $tenant_name

Backing up file uploads

Uploaded files can easily be backed up with:

docker exec -i kiwi_web /bin/tar -cP /Kiwi/uploads > uploads.tar

and then restored with (the -P flag preserves the absolute /Kiwi/uploads path):

cat uploads.tar | docker exec -i kiwi_web /bin/tar -xP

You may also try the rsync command but be aware that it is not installed by default!

Note: the same approach may be used to backup /var/lib/mysql/ or /var/lib/pgsql/data from the kiwi_db container.

Backing up multi-tenant uploads

By default multi-tenant file uploads are stored under /Kiwi/uploads/tenant/$tenant_name. You can archive all contents with the same procedure above. If you wish to restore files per tenant you will have to upload the $tenant_name directory into the docker volume.

Alternatives

By default both docker volumes created for Kiwi TCMS use the local driver and are available under /var/lib/docker/volumes/<volume_name> on the host running your containers. You can try backing them up from there as well.

Another alternative is to use the docker-lvm-plugin and create these volumes as LVM2 block devices. Then use the lvcreate -s command to create a snapshot volume. For more information see chapter 2.3.5. Snapshot Volumes from the LVM Administrator Guide for Red Hat Enterprise Linux 7.

Happy testing!
