Funding.json is some kind of industry standard for how to beg for
funding. Added it. Fixes #608, aka 95b9f5e. The
funding file was partially generated with Claude Code - I asked the AI
to help with reading the specs and setting up the JSON structure
accordingly.
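For context, a funding.json manifest generally has the shape sketched below. This is a hedged illustration based on my reading of the spec - it is not the file actually committed, every name, URL and value here is a placeholder, and the exact field set and allowed values should be verified against the official funding.json specification:

```json
{
  "version": "v1.0.0",
  "entity": {
    "type": "individual",
    "role": "maintainer",
    "name": "Jane Maintainer",
    "email": "jane@example.org",
    "description": "Maintainer of the project.",
    "webpageUrl": { "url": "https://example.org" }
  },
  "projects": [
    {
      "guid": "example-project",
      "name": "example-project",
      "description": "A placeholder project entry.",
      "webpageUrl": { "url": "https://example.org/project" },
      "repositoryUrl": { "url": "https://example.org/repo" },
      "licenses": ["spdx:MIT"],
      "tags": ["python"]
    }
  ],
  "funding": {
    "channels": [
      {
        "guid": "bank-transfer",
        "type": "bank",
        "description": "Contact the maintainer for bank transfer details."
      }
    ],
    "plans": [
      {
        "guid": "general-support",
        "status": "active",
        "name": "General support",
        "description": "Support ongoing maintenance.",
        "amount": 0,
        "currency": "USD",
        "frequency": "one-time",
        "channels": ["bank-transfer"]
      }
    ]
  }
}
```

The manifest is meant to be published at a well-known URL in the repository or on the project website so that funding platforms can discover it.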
The async support has proven a lot more fragile than I had hoped,
so it's appropriate to add some warnings to the async documentation.
Git commit messages should now follow the industry standard.
CHANGELOG prepared for v3.2.0 release
The commit is predominantly human-written, with the following exceptions:
* The code review document was AI-generated, human-updated
* Changelog was AI-maintained, but most of it has been rewritten by hand
prompt: Make a code review of all changes since v3.0.0
followup-prompt: write the review to a file under docs/design/
followup-prompt: The code review was not committed. Commit, then work on the code duplication in response.py
prompt: the ChangeLog should be maintained
Assisted-By: Claude Sonnet 4.6
AI-POLICY.md: 18 additions & 18 deletions
@@ -4,9 +4,6 @@
 
 The most important rule: be honest and inform about it!
 
-Keep a log of the prompts used - prompts should preferably be included in the
-git commits.
-
 Tools should generally be used for improving the quality of the
 project, not for rapidly adding new features.
 
@@ -20,7 +17,7 @@ large for being included in the commit message, etc.
 Keep it clear what is human-written vs what is AI-written. In a
 feature-branch, separate AI-commits and human-commits is preferable.
 Those should most often be squashed together before including it in
-the main branch, with a notice in the commit message on what parts o
+the main branch, with a notice in the commit message on what parts of
 the commit is AI-generated.
 
 ## Transparency matters
@@ -85,6 +82,11 @@ rewritten.
 adding value to the project. You should at least do a quick QA on
 the AI-answer and acknowledge that it was generated by the AI.
 
+* Most AI policies warn about potential copyright infringements. I
+can hardly see any such risk with regard to contributions to the Python
+CalDAV library. In particular, if your changeset consists of lots
+of minor changes to existing code, then it's nothing to worry about.
+
 * The Contributors Guidelines aren't strongly enforced on this project
 as of 2026-02, and I can hardly see cases where the AI would break
 the Code of Conduct, but at the end of the day, it's **YOUR**
@@ -94,7 +96,7 @@ rewritten.
 
 The maintainer started playing with Claude Code in the end of 2025 - and [blogged about it](https://www.redpill-linpro.com/techblog/2026/03/20/from-luddite-to-vibe-coder.html)
 
-Releases 2.2.6 - 3.2.0 has been heavily assisted by Claude - which is pretty obvious when looking into the commit messages. My experiences has been mixed - sometimes it seems to be doing a better and faster job than me, other times it seems to be making a mess a lot faster than what I can do it. Despite (or because of?) using Claude extensively, I spent much more time on it than estimated.
+Releases 2.2.6 - 3.2.0 have been heavily assisted by Claude - which is pretty obvious when looking into the commit messages. My experiences have been mixed - sometimes it seems to be doing a better and faster job than me, other times it seems to be making a mess a lot faster than I could. Despite (or because of?) using Claude extensively, I spent much more time on the 3.0.0-release than estimated.
 
 Lots of time and effort have been spent on doing QA on the changes, fixing up things and/or asking Claude to do a better job. The surge of issues reported after the 3.0-release is probably unrelated to the AI usage - it's a result of trying to shoehorn both async and API changes into it without breaking backward compatibility and without duplicating too much code. The CHANGELOG.md entry for 3.0 explicitly declared a caveat: "there are massive code changes in version 3.0, so if you're using the Python CalDAV client library in some sharp production environment, I would recommend to wait for two months before upgrading".
 
@@ -104,19 +106,17 @@ Generated changes and human-made changes are often mixed up. I prefer "logical"
 
 ## Future plans of GenAI-usage
 
-Post-3.2.0 and until further notice I will try to go more back to the old ways for doing the "core development tasks" - new features and complex refactoring. If nothing else, it's important for maintaining my brain cells, coding skills and making sure all the changes sticks to my memory. The new policy is that GenAI-tools should be used mainly for improving quality, not speeding up the development.
+Post-3.2.0 and until further notice I will try to go back to the old ways for doing the "core development tasks" - new features and complex refactoring. If nothing else, it's important for maintaining my brain cells and coding skills, and for making sure all the changes stick in my memory. The new policy is that GenAI-tools should be used mainly for improving quality, not for speeding up the development.
 
-I still intend to use GenAI heavily for certain tasks, like:
+I still intend to use GenAI heavily for certain tasks, particularly anything that is either "mundane and tedious" or unrelated to "the working end" of the library. Examples:
 
-* Minor bugfixes - with test code. The bugfix itself may often be a simple one-line change, but debugging and writing up the tests is tedious work.
-* Maintaining the integration test framework. It's hard work, even when using Claude. Thanks to Claude I've now been able to put up an extensive "battery" of test servers that I'm checking regularly towards. This is something I've started on several times since 2013 but except for the two integrated python servers I never managed to get any lasting solutions. It's very useful to be able to easily test the library towards a wide range of servers - the majority of the bug reports are compatibility issues. The more servers I have for testing every release, the less troubles will be discovered downstream.
-* Other CI-related frameworks and "boiler plate" for things like automated testing of code embedded in the documentation, QA on the commit messages before I push my git commits out from my laptop, etc. It increases quality, although being quite outside the "core business" of the CalDAV library. Doing it manually (and reading through all the documentation out there) would have stolen lots of valuable time that could have been used for coding.
-* Writing up test code. I've always thought that "test driven development" is a good idea (write test code first, then the logic), but it's quite often both tedious and difficult. Claude can make them really fast. It still needs some QA, care should be taken to ensure it's testing the right thing.
-* Code reviews. The more "eyes" looking into the software, the better - it seems Claude is equally good at spotting the problems and mistakes in my code as I'm on spotting the problems and mistakes in the code Claude generates.
-* Debugging. It's easy to get stuck and spend tons of time on debugging - sometimes (but not always) Claude can find them easily.
-* Various mundane and tedious work (i.e. "I left a TODO-note in the code over there, could you have look into it and eliminate it?").
-* Development of the companion caldav-server-checker tool - writing up checks to discover various server issues may be really tedious and time-consuming, and (most of the time) easy for Claude to get right. The alternative to using GenAI would probably be to have half as many checks. I find those checks very useful.
+* Code reviews. I think there should be a policy that all changesets and releases should go through AI-driven code review. By itself it sounds like a good idea, though one should be aware of the risk that this comes *instead* of human reviews rather than in addition to them.
+* Writing up test code. I do believe "test driven development" is a good idea (write test code first, then the logic), but writing tests may be both tedious and difficult. Claude can write them really fast, though they still need some QA, and care should be taken to ensure they are testing the right thing.
+* Debugging. It's easy to get stuck and spend tons of time on debugging - sometimes (but not always) Claude can find the bugs easily. (The best approach is sometimes to do manual debugging in parallel with AI-driven debugging ... sometimes I "win", other times the AI "wins".)
+* Minor bugfixes ... the bugfix itself may be a one-line changeset, but tests and debugging take time.
+* Maintaining the integration test framework. I consistently failed at setting up and maintaining a "battery" of caldav servers from 2013 to 2025; thanks to Claude we now have it in place. It's important: a majority of the issues reported are about compatibility problems, and the more servers I have for testing every release, the fewer troubles will be discovered downstream.
+* Setting up CI-related automated QA tests, pipelines etc.
+* The companion caldav-server-checker tool is quite suitable for GenAI-work - it's a bit like test code; writing up the checks to discover various server issues is rather tedious and time-consuming. Without AI-help I would probably have covered less than half of the "features" that are now tested for.
 * Investigations of different architectural choices - like with the async work I had Claude develop different design approaches and chose the one that I felt most comfortable with (though I'm still not sure that I did the right choice).
-* Reading RFCs and quickly give a pointer to the relevant sections, or verifying that the code is according to the standards or not.
-
-I will do some research on how to log prompts and chat.
+* Reading RFCs and quickly giving a pointer to the relevant sections, or verifying whether the code conforms to the standards (but care should be taken - I've seen Claude hallucinating completely wrong RFC references).
+* Various other mundane and tedious work (i.e. "I left a TODO-note in the code over there, could you have a look at it and eliminate it?").