Test execution vision/wish/plan/approach
Posted: Tue Mar 30, 2010 4:32 pm
Hi,
there has been some discussion around one of my feature requests, so I thought I should write up how I envision test execution from TestLink and the integration with our test runner.
I will write a "test execution server" which listens for XML-RPC calls from TestLink and starts tests when told to. It will listen for "ExecuteTest" and "ExecutePlan" (once bug#2056 or something similar is implemented). These messages need to contain all the information I need to run the tests (see below, bug#3338). Why "ExecutePlan"? Because I want to run many tests in parallel, I might have to queue up some tests, and each test can run for hours or days, so I do not want the TL web GUI to wait for my response.
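Roughly, the server could look like the Python sketch below. The "ExecuteTest"/"ExecutePlan" method names and the payload are assumptions taken from this proposal, not an existing TestLink interface; the point is only that calls are queued and answered immediately.

import queue
import threading
from xmlrpc.server import SimpleXMLRPCServer

work_queue = queue.Queue()   # tests can run for hours/days, so only enqueue here

def execute_test(args):
    # args is expected to carry everything needed to run one test case
    # (test case id, test plan id, platform_id, build id, ...).
    work_queue.put(('test', args))
    return 'queued'          # return at once so the TL web GUI never waits

def execute_plan(args):
    work_queue.put(('plan', args))
    return 'queued'

def worker():
    while True:
        kind, args = work_queue.get()
        # ... resolve test cases, pick machines, start the actual run ...
        work_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

server = SimpleXMLRPCServer(('0.0.0.0', 8123), allow_none=True)
server.register_function(execute_test, 'ExecuteTest')
server.register_function(execute_plan, 'ExecutePlan')
server.serve_forever()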
When I have a plan or test, I will get the test suites and test cases in the test plan (bug#2122 - which I have already contributed, bug#33144 - which I have already contributed, and maybe more of bug#2408 for navigation). When looking for active (bug#3335, bug#3148) test cases, I will select the test cases that are marked as automated and have a custom field that I use to specify which test file to run.
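The fetching and filtering part could look roughly like this against the standard XML-RPC API. This is only a sketch: the URL, dev key and the custom field name ("AutomatedTestFile") are placeholders, and the exact shape of the returned structures varies between TL versions, so treat the field names as assumptions.

import xmlrpc.client

server = xmlrpc.client.ServerProxy('http://testlink.example.com/lib/api/xmlrpc.php')  # placeholder URL
DEV_KEY = 'my-dev-key'                                                                # placeholder key

def automated_test_cases(testplan_id, testproject_id):
    # tl.getTestCasesForTestPlan returns a dict keyed by test case id;
    # the value layout differs between versions, so handle it defensively.
    cases = server.tl.getTestCasesForTestPlan({'devKey': DEV_KEY,
                                               'testplanid': testplan_id})
    selected = []
    for tc_id, entry in cases.items():
        info = entry[0] if isinstance(entry, list) else entry
        if int(info.get('execution_type', 1)) != 2:        # 2 == automated in TestLink
            continue
        test_file = server.tl.getTestCaseCustomFieldDesignValue({
            'devKey': DEV_KEY,
            'testcaseexternalid': info['full_external_id'],
            'version': int(info['version']),
            'testprojectid': testproject_id,
            'customfieldname': 'AutomatedTestFile',        # assumed field name
        })
        if test_file:
            selected.append((info, test_file))
    return selected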
Unless bug#2056 (or a similar execute-plan/test-case hook) turns out to be the way to go, I can use the TL API to get projects, test plans, builds etc., and create a GUI that lets the user decide and kick off the test from outside TL.
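Browsing TestLink from the outside is already possible with the existing API methods, roughly like this (URL and dev key are placeholders, and the return shapes can vary by TL version):

import xmlrpc.client

server = xmlrpc.client.ServerProxy('http://testlink.example.com/lib/api/xmlrpc.php')
DEV_KEY = 'my-dev-key'

# Walk project -> test plan -> build so a small GUI can offer the choices
# and then hand the selected plan to the test execution server above.
for project in server.tl.getProjects({'devKey': DEV_KEY}):
    plans = server.tl.getProjectTestPlans({'devKey': DEV_KEY,
                                           'testprojectid': project['id']})
    for plan in plans:
        builds = server.tl.getBuildsForTestPlanID({'devKey': DEV_KEY,
                                                   'testplanid': plan['id']})
        print(project['name'], plan['name'], [b['name'] for b in builds])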
After I have used the TL API to get all the tests, I need to know where to run them. With bug#3338 (already contributed), I have the platform_id, but I would also need to know which machines to run the tests on. Bug#3341 and bug#3348 should hopefully let me connect the dots. Some of the tests are special (and will have a custom field to say so), e.g. they need to run on machines with huge hard disk drives, special file systems or similar. To mark such machines, I'll use custom fields on machines (bug#3339).
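The machine selection itself is then a simple matching problem. There is no machine concept in TL yet (that is what bug#3339/bug#3341/bug#3348 would add), so this toy sketch uses made-up machine records and tags:

machines = [
    {'name': 'lab-01', 'platform_id': 3, 'tags': {'big_disk'}},
    {'name': 'lab-02', 'platform_id': 3, 'tags': set()},
]

def pick_machine(platform_id, required_tags):
    """Return the first machine on the right platform with all required tags."""
    for machine in machines:
        if machine['platform_id'] == platform_id and required_tags <= machine['tags']:
            return machine
    return None

print(pick_machine(3, {'big_disk'}))   # -> lab-01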
(Platforms and machines will be imported with a script that "crawls" our lab and uses bug#3339 and bug#3340 to add extra information about the machines that we have.)
I now have a list of machines and a list of tests; the last thing I need to know is which software to test. Bug#3338 provides a build ID. With custom fields on builds (no bug yet), or better a "build installation" (which would have a reference to a build and a platform and contain a host name and a path) to say where the "binaries" are placed, I could get the software under test as well.
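To make that concrete, a "build installation" could be as small as the record below. This is entirely hypothetical; nothing like it exists in TL today.

# Hypothetical "build installation": ties a build and a platform to where the
# binaries to test are installed.
build_installations = [
    {'build_id': 7, 'platform_id': 3,
     'host': 'fileserver.example.com',
     'path': '/builds/7/linux-x86_64'},
]

def find_binaries(build_id, platform_id):
    for inst in build_installations:
        if inst['build_id'] == build_id and inst['platform_id'] == platform_id:
            return (inst['host'], inst['path'])
    return None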
Now my test runner will start running tests. I'll add a new test execution status "started" which I set when I start a test run, and then (some hours/days later) I'll update the test execution with the final outcome. To the report I'd like to attach which machines the test ran on (with custom fields and bug#3339), set a custom field for where the logs are, and maybe attach a file with a summary of the test run as an attachment (bug#1890).
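Reporting the final outcome is already covered by tl.reportTCResult. The "started" status, the machine/log custom fields and the attached summary all depend on the feature requests above, so this sketch only shows the final pass/fail report, and all values are placeholders:

import xmlrpc.client

server = xmlrpc.client.ServerProxy('http://testlink.example.com/lib/api/xmlrpc.php')

result = server.tl.reportTCResult({
    'devKey': 'my-dev-key',
    'testcaseexternalid': 'PROJ-123',   # hypothetical test case
    'testplanid': 42,
    'buildid': 7,
    'platformid': 3,                    # needs the platform support from bug#3338
    'status': 'p',                      # 'p' passed, 'f' failed, 'b' blocked
    'notes': 'Ran on lab-01, logs at //fileserver/logs/run-20100330',
})
print(result)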
That is the plan so far. After this there will probably be reports (e.g. "show running tests", "show machines in use", ...) that are needed, and I can envision scalability problems in the views, because we get many more test executions when tests are run automatically than when they are run manually.
Do people think this is a good approach? Do you have better suggestions? Should I forget it, or should I fire up my editor?