Re: running tests through tooling-api with detailed feedback/progress info ?
On Wed, Jun 18, 2014 at 7:36 AM, Magnus Rundberget <[hidden email]> wrote:
I'm not sure if this question belongs in the forum or here, but I'll start here since I have a hunch that what I'm asking won't be trivial and might require extending the tooling API.
This is a good place IMO.
A little context first:
Use Case: Running tests from a Gradle project and showing the results inline in the editor
1) I open a test file (spock, junit).
- Light Table recognizes that the file is a (gradle) test and visually indicates this to the user
- I'm guessing this should be possible through a custom model, by getting hold of a list of test source directories and the associated include/exclude patterns
I agree. From a task of type Test you can get the classes dir and map it back to a sourceSet to find where the sources live. The include/exclude patterns are on the Test task too.
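To make that concrete, here's a rough sketch of how an editor plugin could decide "is this file a test?" once it has a test source dir plus the Test task's include/exclude patterns. The Ant-to-regex conversion below is my own simplified approximation of Gradle's pattern matching (it handles `**`, `*` and `?` only), not Gradle's actual implementation:

```java
import java.util.regex.Pattern;

public class TestFileMatcher {

    // Convert an Ant-style pattern (as used by Test.include/exclude) to a regex.
    // "**" matches across directory separators, "*" within a path segment, "?" one char.
    static Pattern antToRegex(String ant) {
        StringBuilder sb = new StringBuilder();
        int i = 0;
        while (i < ant.length()) {
            char c = ant.charAt(i);
            if (c == '*') {
                if (i + 1 < ant.length() && ant.charAt(i + 1) == '*') {
                    sb.append(".*");
                    i += 2;
                    if (i < ant.length() && ant.charAt(i) == '/') i++; // "**/" also matches zero dirs
                } else {
                    sb.append("[^/]*");
                    i++;
                }
            } else if (c == '?') {
                sb.append("[^/]");
                i++;
            } else {
                sb.append(Pattern.quote(String.valueOf(c)));
                i++;
            }
        }
        return Pattern.compile(sb.toString());
    }

    // True if the path (relative to a test source root) is selected by the patterns.
    // Excludes win over includes; an empty include list selects everything.
    static boolean isTestFile(String relativePath, String[] includes, String[] excludes) {
        for (String ex : excludes) {
            if (antToRegex(ex).matcher(relativePath).matches()) return false;
        }
        if (includes.length == 0) return true;
        for (String in : includes) {
            if (antToRegex(in).matcher(relativePath).matches()) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        String[] includes = {"**/*Test.class"};
        String[] excludes = {"**/Abstract*"};
        System.out.println(isTestFile("com/acme/FooTest.class", includes, excludes));         // true
        System.out.println(isTestFile("com/acme/AbstractFooTest.class", includes, excludes)); // false
    }
}
```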
2) I invoke a Light Table command to run the test
- Light Table, through the plugin, invokes a Gradle task (custom ?) to execute the test in question
The model should return a map like [ 'test': testSourcesAndOptionalData, 'integTest': integTestSourcesAndOptionalData ] so that we know which task to execute. Note that there can be overlap: one test may be executable via different tasks (think testWithInMemoryDB/testWithMysql).
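Sketching that map in Java terms (the task names and paths below are invented examples, and the real model would carry richer per-task data than a set of dir strings), the "overlap" point is just that a lookup by source dir can return several tasks:

```java
import java.util.*;

// Hypothetical shape of the data a custom tooling model could return:
// one entry per Test task, mapping its name to the test source dirs it runs.
public class TestTaskModel {

    public static Map<String, Set<String>> testTasks() {
        Map<String, Set<String>> m = new LinkedHashMap<>();
        m.put("test", new LinkedHashSet<>(Arrays.asList("src/test/groovy")));
        m.put("integTest", new LinkedHashSet<>(Arrays.asList("src/integTest/groovy")));
        // Overlap: the same sources can be run by more than one task.
        m.put("testWithMysql", new LinkedHashSet<>(Arrays.asList("src/test/groovy")));
        return m;
    }

    // Given the source dir a test file lives in, find every task that can run it.
    public static List<String> tasksFor(String sourceDir) {
        List<String> tasks = new ArrayList<>();
        for (Map.Entry<String, Set<String>> e : testTasks().entrySet()) {
            if (e.getValue().contains(sourceDir)) tasks.add(e.getKey());
        }
        return tasks;
    }

    public static void main(String[] args) {
        System.out.println(tasksFor("src/test/groovy")); // [test, testWithMysql]
    }
}
```

When the lookup returns more than one task, the editor would have to ask the user which one to run (or run a default).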
- Gradle reports generic progress through the tooling API (typically if it has to resolve dependencies, run other tasks first, etc.)
Yup, this remains, though we may want to tweak what is sent to make it more useful.
- Gradle reports test progress (akin to the TestListener), through the tooling api (somehow)
It is quite possible that we can use something very similar to TestListener. I will check what existing Java IDEs expect when hooking up custom test runners. I suppose that if Gradle sends the information that is already broadcast to TestListener instances, that should be sufficient.
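For discussion's sake, a tooling-api-side listener mirroring the TestListener callbacks might look something like the sketch below. The names are purely illustrative, not part of the current tooling API; the recording implementation shows roughly how an editor plugin would collect events before rendering them inline:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical client-side listener, mirroring what Gradle already broadcasts
// to TestListener instances inside the build. Names are invented for this sketch.
interface RemoteTestListener {
    void testStarted(String className, String testName);
    void testFinished(String className, String testName, String result); // "ok", "failed", "skipped"
}

// Trivial implementation that records events for later inline rendering.
public class RecordingListener implements RemoteTestListener {
    public final List<String> events = new ArrayList<>();

    @Override public void testStarted(String className, String testName) {
        events.add("started " + className + "." + testName);
    }

    @Override public void testFinished(String className, String testName, String result) {
        events.add("finished " + className + "." + testName + " " + result);
    }

    public static void main(String[] args) {
        RecordingListener l = new RecordingListener();
        l.testStarted("FooTest", "works");
        l.testFinished("FooTest", "works", "ok");
        System.out.println(l.events); // [started FooTest.works, finished FooTest.works ok]
    }
}
```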
- Light Table displays results inline in the editor for the test
-- Showing ok, fail, error for each test. Ideally showing errors at correct line number
Errors/failures should include stack traces to give you a chance to annotate errors in the sources or enable navigation. We also want to be able to initiate a 're-run test' action.
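Assuming the failure arrives as a plain stack trace string, mapping it back to a line in the test file is mostly a matter of finding the first frame that belongs to the test class. A minimal sketch (the frame regex is simplified and would need hardening for inner classes, lambdas, etc.):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StackTraceAnnotator {

    // Matches frames like "at com.acme.FooSpec.broken(FooSpec.groovy:42)"
    private static final Pattern FRAME =
        Pattern.compile("\\s*at\\s+([\\w.$]+)\\.([\\w$<>]+)\\((\\w+\\.\\w+):(\\d+)\\)");

    // Returns "file:line" for the first frame inside the given test class, or null.
    public static String failureLocation(String stackTrace, String testClassSimpleName) {
        for (String line : stackTrace.split("\n")) {
            Matcher m = FRAME.matcher(line);
            if (m.matches() && m.group(3).startsWith(testClassSimpleName)) {
                return m.group(3) + ":" + m.group(4);
            }
        }
        return null;
    }

    public static void main(String[] args) {
        String st = "\tat org.junit.Assert.fail(Assert.java:88)\n"
                  + "\tat com.acme.FooTest.broken(FooTest.java:42)";
        System.out.println(failureLocation(st, "FooTest")); // FooTest.java:42
    }
}
```

The frames above the test class (assertion library internals) are skipped, which is exactly what you want for placing an inline annotation at the failing line.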
a) What would be possible to achieve with the current tooling API?
Using the IdeaProject model gives you a good approximation of where (unit) tests can be found. It probably covers about 3/4 of cases.
b) How would I go about exposing something like a test listener through the current tooling API, if that is at all possible?