Improvements to integration tests #13956

Open
opened 2025-11-02 10:58:19 -06:00 by GiteaMirror · 2 comments
Owner

Originally created by @TheFox0x7 on GitHub (Jan 10, 2025).

Feature Description

While writing an integration test for the pull request template, I had some difficulties getting it to work initially. I observed the following issues:

  1. There is little documentation on how they work, what details they rely on (does userN with repo foobar exist?), when to add one, and so on.
  2. Most helper functions are undocumented, so new people (like myself) mostly copy old tests and adjust them to fit their use case.
  3. The tests themselves lack descriptions, so there is no way to verify that they test what they should (which is uncommon but does happen; not observed here, though).

I have some ideas of what tests could look like, but I'd like to open a discussion about it first. Some of my ideas:

  1. Add a scenario description to each test. It helps later on when refactoring, reviewing, etc., and is especially important when the test is broken: there is less guessing about what the author wanted to check for.
  2. It would be better to have tests be self-contained. Currently I have no way to easily check why user1 is used for a given test. IMO each test should set up its own preconditions¹ or at minimum explain what they are.
  3. Having a tag system for tests would also be nice; it helps run only related tests for somewhat faster local iteration.

Blackbox tests would be nice too, but those don't provide measurable coverage.

Screenshots

No response


  1. That isn't to say that loading the database isn't useful; I'm sure it is for triggering something very specific, though I haven't delved deep enough to find an example. On the other hand, I think that having userCreate(opts) or similar, which can use the web/API to create the user and mimics what the end user can actually do, is friendlier and will help make tests more approachable.

GiteaMirror added the type/proposal label 2025-11-02 10:58:19 -06:00
Author
Owner

@wxiaoguang commented on GitHub (Jan 10, 2025):

Agree to improve the tests.

  • Add scenario description to a test, it helps later on when refactoring, reviewing, etc. and it is especially important when the test is broken - there's less guessing as to what the author wanted to check for.

Yep, we should encourage developers to write more informative comments, not only in tests.

  • It would be better to have tests be self-contained. Currently I have no way to easily check why user1 is used for a given test. IMO each test should set up its own preconditions¹ or at minimum explain what they are.

Agree. Some existing fixtures are good enough for most cases; for special cases, we should create users or repositories during the tests. (I have stopped some PRs from adding more rows to "user.yml" or more files like "new-test.git". That may not make everyone happy, but I think it is still right to keep the tests clear.)

Actually, for existing fixtures, there used to be some comments like this (screenshot):

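For reference, a hedged sketch of what such inline fixture documentation could look like. The field names loosely follow Gitea's models/fixtures YAML style; the users, IDs, and relations shown are illustrative, not the actual fixture contents.

```yaml
# user1: instance admin, owns no repositories
- id: 1
  name: user1
  is_admin: true

# user2: regular user, owns repo1 (public) and repo2 (private);
# tests that need a private repo should use user2/repo2
- id: 2
  name: user2
  is_admin: false
```

Even this small amount of commentary would answer the "why is user1 used here?" question without reading every test that touches the fixture.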

  • Having a tag system for tests would also be nice - it helps to run only related tests for a bit faster local iteration time.

Not sure whether it really helps (or what you propose concretely to achieve it).

In most cases, I just click the "Run" icon in my IDE and only run the test I am working on (screenshot):


There are also some Makefile commands (but I seldom use them):
	@echo " - test[\#TestSpecificName]         run unit test"
	@echo " - test-sqlite[\#TestSpecificName]  run integration test for sqlite"
Author
Owner

@TheFox0x7 commented on GitHub (Jan 10, 2025):

I don't remember if I tried them, to be honest; I think they failed because some env vars are set when run with make. But by a "tag system" I meant something more in the direction of: "I've modified something that touched pull request code; I want to run all tests related to pull requests." Then again, this is more a QoL thing than something major.

As for the fixtures: I meant that currently, instead of making a new user and configuring them for the specific test, it's more natural to look for a preloaded user, which causes a loop of sorts. Pick a user, pick a repo, the test fails because the repo is private and that's the only repo the user has, pick a new set, and so on.
I'm not a big fan of fixtures, but then again, making a new user just to check that blocking works, only to unblock and delete both users, isn't great either...

Basic test example that I'm used to
// Runs once for all tests in the file.
func SetupSuite() {
	// We want a user with a repository.
	user := admin.CreateUser()
	repo := user.CreateRepository()
}

func TeardownSuite() {
	user.DeleteRepository(repo)
	admin.DeleteUser(user)
}

// Scenario: a user can make their repository private.
func UserCanMakeRepoPrivate() {
	// Teardown: restore the repository's visibility after the scenario.
	defer repo.SetPrivate(false)
	repo.SetPrivate(true)
	assert.True(repo.Private)
}

At the very least, documenting how to set up an instance with fixtures, and the relations between users/repos in the fixtures, would be a good start.

I'll try looking into how integration tests are done when I have some time.

Reference: github-starred/gitea#13956