Add tests or other quality measures to the codebase? #6403

@jens1o

Description

Hi everyone,

I'm opening this as a discussion/question; unfortunately, I can't assign an appropriate label myself.

I regularly follow the changelogs, and I'm curious about the quality assurance process for the software, particularly the FOSS components. Please know that I'm absolutely not implying anyone is doing a bad job, nor am I trying to be rude. My intention is purely to question the overall system and processes in place, and I hope it's received in that spirit. :)

For example, in the changelog of the twelfth (!) patch release, I noticed a fix for "Due to a bug in the new upload pipeline, it was not possible to upload files over 2 GB across browsers." This type of issue can sometimes be caught with pre-release testing, which I would have expected for a feature introduced many versions ago.

Perhaps issues like 6f1faee could also be caught by code linting, which would flag a dead boolean expression where an assignment was intended?
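To illustrate the kind of check I mean (not the project's actual tooling, and the snippet being analyzed is hypothetical): a bare comparison used as a statement, e.g. `==` where `=` was intended, discards its result and can be detected mechanically. A minimal sketch in Python using the standard `ast` module:

```python
import ast

def find_dead_expressions(source: str) -> list[int]:
    """Return line numbers of statements that are bare comparisons,
    e.g. `enabled == True` written where `enabled = True` was intended."""
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        # ast.Expr is an expression used as a statement; a Compare
        # inside it means the comparison's result is simply thrown away.
        if isinstance(node, ast.Expr) and isinstance(node.value, ast.Compare)
    ]

# Hypothetical snippet containing a dead comparison on line 1:
snippet = "enabled == True\n"
print(find_dead_expressions(snippet))  # → [1]
```

Mature linters (PHPStan, Psalm, ESLint, pylint, etc.) ship rules of exactly this shape, so adopting one in CI tends to be a comparatively cheap win.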

I'm accustomed to working with software that employs several levels of automated tests, such as unit, integration, and end-to-end tests. Are there any long-term plans to introduce or expand testing within the codebase? I understand that the current architecture might present challenges (e.g., the use of singletons) and that it is certainly not an easy task, but perhaps there are opportunities for some quick wins here, or a vision for a new major version.
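As a concrete example of the "quick win" I have in mind: even before the architecture supports full integration tests, a boundary like the 2 GB upload limit mentioned above lends itself to a small regression test. A minimal sketch using Python's `unittest`, where `max_upload_size_ok` is a hypothetical helper standing in for the real validation logic:

```python
import unittest

def max_upload_size_ok(size_bytes: int, limit_bytes: int = 4 * 1024**3) -> bool:
    """Hypothetical helper: accept uploads larger than zero bytes
    and no larger than the configured limit (here 4 GiB)."""
    return 0 < size_bytes <= limit_bytes

class UploadSizeTest(unittest.TestCase):
    def test_file_over_2gb_is_accepted(self):
        # Regression-style check for the 2 GB boundary mentioned above.
        self.assertTrue(max_upload_size_ok(2 * 1024**3 + 1))

    def test_empty_file_is_rejected(self):
        self.assertFalse(max_upload_size_ok(0))

    def test_file_over_limit_is_rejected(self):
        self.assertFalse(max_upload_size_ok(4 * 1024**3 + 1))

if __name__ == "__main__":
    unittest.main()
```

Tests pinned to previously shipped bugs like this are cheap to write and prevent the same regression from recurring in a later patch release.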

My aim here is purely to understand the current approach and potential future directions for ensuring software stability and reliability. Maybe you have recommendations for plugin maintainers?

Thanks for your insights!

Metadata

    Assignees: No one assigned
    Labels: No labels
    Projects: Status: Done
    Milestone: No milestone
    Relationships: None yet
    Development: No branches or pull requests