
Improvements to integration tests #33203

Open
TheFox0x7 opened this issue Jan 10, 2025 · 2 comments
Labels
type/proposal The new feature has not been accepted yet but needs to be discussed first.

Comments

@TheFox0x7
Contributor

Feature Description

While writing an integration test for the pull request template, I initially had some difficulty getting it to work. I observed the following issues:

  1. There's not a lot of documentation on how they work, where to find details about them (does userN with repo foobar exist?), when to add one, and so on.
  2. Most helper functions are undocumented, so for new people (like myself) it's mostly copying and adjusting old tests to fit their use case.
  3. The tests themselves lack descriptions, so there's no way to verify whether they test what they should (a mismatch is not common, but it happens - not observed here though).

I have some ideas of what tests could look like, but I'd like to open a discussion about it first. Some of my ideas:

  1. Add a scenario description to each test; it helps later on when refactoring, reviewing, etc., and it is especially important when the test is broken - there's less guessing about what the author wanted to check for.
  2. It would be better to have tests be self-contained. Currently I have no easy way to check why user1 is used for a given test. IMO each test should set up its own preconditions1 or at minimum explain what they are (a rough sketch of what this could look like follows below).
  3. Having a tag system for tests would also be nice - it helps to run only related tests for a bit faster local iteration time.

Black-box tests would be nice too, but those don't provide measurable coverage.
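To illustrate point 2 (and the userCreate(opts) idea from the footnote), here is a rough sketch of what a self-contained test could look like. Every name in it is hypothetical - none of these helpers exist in Gitea today:

```go
package integration

import "testing"

// User and Repository are stand-ins for whatever the real helpers would
// return; every name in this sketch is hypothetical.
type User struct{ Name string }

type Repository struct{ Private bool }

// apiCreateUser sketches the userCreate(opts) idea from the footnote: create
// the user through the admin web/API (mimicking what an end user can actually
// do) and register cleanup so the test leaves no trace behind.
func apiCreateUser(t *testing.T, name string) *User {
	t.Helper()
	u := &User{Name: name} // imagine: POST to the admin "create user" endpoint
	t.Cleanup(func() {
		// imagine: delete the user again once the test finishes
	})
	return u
}

func (u *User) CreateRepository(t *testing.T, name string) *Repository {
	t.Helper()
	r := &Repository{}
	t.Cleanup(func() {
		// imagine: delete the repository again
	})
	return r
}

func (r *Repository) SetPrivate(private bool) { r.Private = private }

// Scenario: a freshly created user can make their repository private.
// The test sets up its own preconditions instead of borrowing userN from the
// fixtures, so a reviewer never has to ask "why user1?".
func TestUserCanMakeRepoPrivate(t *testing.T) {
	user := apiCreateUser(t, "privacy-tester")
	repo := user.CreateRepository(t, "playground")

	repo.SetPrivate(true)
	if !repo.Private {
		t.Error("repository should be private after SetPrivate(true)")
	}
}
```

Using t.Cleanup ties the teardown to the setup helper, so a test can't forget to remove what it created.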

Screenshots

No response

Footnotes

  1. That isn't to say that loading the database isn't useful - I'm sure it is for triggering something very specific, though I haven't delved deep enough to find an example. On the other hand, I think that having userCreate(opts) or similar, which can use the web/API to create the user and mimics what the end user can actually do, is more friendly and would make the tests more approachable.

@TheFox0x7 added the type/proposal label Jan 10, 2025
@wxiaoguang
Contributor

wxiaoguang commented Jan 10, 2025

Agree to improve the tests.

  • Add a scenario description to each test; it helps later on when refactoring, reviewing, etc., and it is especially important when the test is broken - there's less guessing about what the author wanted to check for.

Yep, we should encourage developers to write more informative comments, not only in tests.

  • It would be better to have tests be self-contained. Currently I have no easy way to check why user1 is used for a given test. IMO each test should set up its own preconditions or at minimum explain what they are.

Agree, some existing fixtures are good enough for most cases. For special cases, we should create users or repositories during the tests. (I have stopped some PRs from adding more rows to "user.yml" or more files like "new-test.git"; maybe that won't make everyone happy, but I think it is still right to keep the tests clear.)

Actually, for existing fixtures, there used to be some comments like this (screenshot):

[screenshot: fixture entries annotated with explanatory comments]

  • Having a tag system for tests would also be nice - it helps to run only related tests for a bit faster local iteration time.

Not sure whether it really helps (or what your proposal to do so would be).

In most cases, I just click the "Run" icon in my IDE and only run the test I am working on (screenshot):

[screenshot: "Run test" icon in the IDE gutter next to a test function]

There are also some Makefile commands (but I seldom use them):

```
 - test[#TestSpecificName]          run unit test
 - test-sqlite[#TestSpecificName]   run integration test for sqlite
```
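(If I read that help text right, something like `make test-sqlite#TestPullRequestTemplate` would run just that one integration test against SQLite; the test name here is only an example.)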

@TheFox0x7
Contributor Author

I don't remember if I tried them, to be honest - I think they failed because some env vars are only set when running through make. But by a "tag system" I mean something more in the direction of: "I've modified something that touched pull request code; I want to run all tests related to pull requests." Then again, this is more a QoL thing than something major.
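As a strawman of what I mean (not an existing Gitea convention - the prefix and paths here are made up), a plain naming convention already gets close to a tag system, because go test's -run flag matches a regular expression against test names:

```go
package integration

import "testing"

// Strawman "tag" via naming convention: every pull-request-related test
// shares the TestPullRequest prefix, so the whole group can be selected with
//
//	go test -run '^TestPullRequest' ./tests/integration/...
//
// while a single scenario still runs via -run '^TestPullRequestTemplate$'.

func TestPullRequestTemplate(t *testing.T) {
	t.Log("scenario: the repository's PR template is applied to new pull requests")
	// ... assertions ...
}

func TestPullRequestMerge(t *testing.T) {
	t.Log("scenario: a mergeable pull request can be merged")
	// ... assertions ...
}
```

Go build tags could express this too, but by default they exclude the tagged files rather than select them, so a naming convention seemed like the simpler illustration.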

As for the fixtures - I meant that currently, instead of making a new user and configuring them for the specific test, it's more natural to look for a preloaded user, which causes a loop of sorts: pick a user, pick a repo, the test fails because the repo is private and it's the only repo the user has, pick a new set, and so on.
I'm not a big fan of fixtures, but then again, making a new user just to check if blocking works, only to unblock and delete both users afterwards, isn't great either...

Basic test example that I'm used to (pseudocode; user and repo are shared suite-level state):

```go
// SetupSuite runs the things that every test in the file needs.
func SetupSuite() {
	// We want a user with a repository.
	user = admin.CreateUser()
	repo = user.CreateRepository()
}

func TeardownSuite() {
	user.DeleteRepository(repo)
	admin.DeleteUser(user)
}

// Scenario: a user can make their repository private.
func UserCanMakeRepoPrivate() {
	// Restore the original state once the scenario finishes.
	defer repo.SetPrivate(false)

	repo.SetPrivate(true)
	assert.True(repo.Private)
}
```
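For what it's worth, the testify library that Gitea already uses for assertions also ships a suite package supporting exactly this pattern. A minimal sketch - User, Repository, and admin are hypothetical stand-ins again, stubbed out here so the example compiles:

```go
package integration

import (
	"testing"

	"github.com/stretchr/testify/suite"
)

// Hypothetical stand-ins, stubbed so the sketch is self-contained.
type User struct{}

type Repository struct{ Private bool }

func (u *User) CreateRepository() *Repository  { return &Repository{} }
func (u *User) DeleteRepository(r *Repository) {}
func (r *Repository) SetPrivate(private bool)  { r.Private = private }

var admin = struct {
	CreateUser func() *User
	DeleteUser func(*User)
}{
	CreateUser: func() *User { return &User{} },
	DeleteUser: func(u *User) {},
}

// repoSuite carries the shared state that SetupSuite creates.
type repoSuite struct {
	suite.Suite
	user *User
	repo *Repository
}

// SetupSuite runs once before any test in the suite.
func (s *repoSuite) SetupSuite() {
	s.user = admin.CreateUser()
	s.repo = s.user.CreateRepository()
}

// TearDownSuite runs once after all tests in the suite.
func (s *repoSuite) TearDownSuite() {
	s.user.DeleteRepository(s.repo)
	admin.DeleteUser(s.user)
}

// Scenario: a user can make their repository private.
func (s *repoSuite) TestUserCanMakeRepoPrivate() {
	defer s.repo.SetPrivate(false) // restore state for the next scenario

	s.repo.SetPrivate(true)
	s.True(s.repo.Private)
}

// TestRepoSuite hooks the suite into plain `go test`.
func TestRepoSuite(t *testing.T) {
	suite.Run(t, new(repoSuite))
}
```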

At the very least, documenting how to set up an instance with the fixtures, and the relations between users/repos in the fixtures, would be a good place to start.

I'll try looking into how integration tests are done when I have some time.
