Extended the GitHub pull request template to require AI assistance disclosure. #19594
Conversation
Looks good to me. I think calling out Copilot like you are doing is good too, since it's the built-in default for most GitHub things.
I think this AI Assistance Disclosure is a great idea.
Obviously all contributions should adhere to Django's contribution and security policies. Is the inclusion here as a checkbox because we expect a higher volume of lower-quality submissions that don't follow the contribution guide with AI? Does the checkbox do enough for this? Could we link to the relevant policies in the comment to help new contributors? I'm aware of the code contribution guides: https://docs.djangoproject.com/en/dev/internals/contributing/writing-code
Yeah, actually I was just thinking: should that sit with the other checkboxes? 🤔
Why are we doing this? Can we get links to any other discussions leading to this... |
I'm definitely open to tightening up the language and/or the format, though I'm also trying to keep the template lightweight for actual human contributors. The checkbox is aiming at being a low-friction nudge. Would you have any suggestion? I'm very open to ideas!
Yes, absolutely, great point! (@RealOrangeOne is also suggesting this, so I'll push an update soon.)
You're correct: by "security policies" I primarily mean the security reporting guidelines and the expectations around responsible disclosure, not anything specific to code contributions per se. I can see how this could be a bit confusing and I'm happy to rephrase. I felt it was worth including to emphasize that contributors (especially AI-assisted ones) should be aware of those boundaries and not propose PRs (which are inherently public) for (potential) security vulnerabilities.
Force-pushed cc017fa to 6b18ee2.
@adamchainz This change is meant to address the increase in lower-quality AI-generated submissions that the Fellows have been dealing with on a day-to-day basis. Do you think this level of change warrants a forum post or broader discussion, given its impact on their workflow? If you have specific concerns or suggestions, I'd love to hear them. Your earlier response came across as a bit dismissive, so I wanted to check if there's a particular reason you're hesitant about the change.
Just FYI, I find Adam generally a respectful person; he's just thorough and likes to not see things creep too far. I was also going to question the usefulness of it, as it seems like rubber stamping, but decided to wait and see where it was going. 👍 I watch all PRs as well and I've noticed the odd spam PR. I didn't realise it was that annoying, but if it makes Natalia's and Sarah's lives easier then sure 👍
Sorry for being curt earlier! I meant nothing by it; I just wanted more context. I am not confident that more steps will stop slop contributions from being made, given how often folks delete or ignore the template. Looking back at recent unmerged PRs, none of the first ~10 spam ones used the template. But if the Fellows want to give it a try, sure, we can try. I will need to fill it in every time though, given I have Copilot active...
I do think we need something given what I've heard from the Fellows about the growing number of un- or partially-reviewed AI-generated PRs, especially with classics such as modules and functions that don't exist. I feel it's a relatively normal position to say "if you don't fully review your code, we are not going to either".

As such, I think we should add similar language to what we already have in the security policies about AI-generated reports to the general PR/contributions section, including the fact that obviously unreviewed/incorrect machine-generated PRs get closed without response and repeated acts will earn you a ban, and then also add some checklist items.

I think we can add less, though. I'd say just two checkboxes, either "I used no AI generation tools to create this pull request" or "I used AI generation tools to create the request, have disclosed them, and have fully reviewed their output", would be sufficient. Two where you must pick exactly one is nice, as that way if someone just ticks every box you know they've not read it.
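As an illustration only, the pair of mutually exclusive checkboxes suggested above could look something like this in the template markdown (the exact wording here is hypothetical, not the merged text):

```markdown
## AI Assistance Disclosure (REQUIRED)

<!-- Check exactly ONE of the following. -->

- [ ] I used **no** AI generation tools to create this pull request.
- [ ] I used AI generation tools to help create this pull request, have
      disclosed which ones below, and have fully reviewed their output.

<!-- If the second box is checked, list the tools used (e.g. Copilot): -->
```

Since GitHub renders `- [ ]` items as task-list checkboxes, a reviewer can see at a glance whether the contributor picked exactly one, and ticking both (or neither) signals the template wasn't read.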
Good idea, I will push this as a separate commit.
Makes sense, will tweak. Thank you!
Force-pushed 6b18ee2 to cdb8af6.
Pushed updates as requested by @andrewgodwin. Could use an approval from a steering council member, or someone from the triage & review team or security team, so I'm allowed to merge. |
Read this today; the whole piece is somewhat relevant, but this section in particular links out to some other open-source projects and their AI contribution policies.
AI-Assisted Contributions
-------------------------

As explained in the :ref:`security reporting guidelines <ai-assistance>`, if
Thought: is this a bit backwards? Should the security policy reference the general contributor policy instead? It's a larger diff, but it may make more sense to readers.
[Updated template to show example below]
Trac ticket number
ticket-XXXXX

Branch description
Provide a concise overview of the issue or rationale behind the proposed changes.

AI Assistance Disclosure (REQUIRED)

Checklist
main branch.