Commons:Village pump/Proposals

From Wikimedia Commons, the free media repository

Shortcuts: COM:VP/P • COM:VPP

Welcome to the Village pump proposals section

This page is used for proposals relating to the operations, technical issues, and policies of Wikimedia Commons; it is distinguished from the main Village pump, which handles community-wide discussion of all kinds. The page may also be used to advertise significant discussions taking place elsewhere, such as on the talk page of a Commons policy. Recent sections with no replies for 30 days and sections tagged with {{Section resolved|1=--~~~~}} may be archived; for old discussions, see the archives; the latest archive is Commons:Village pump/Proposals/Archive/2024/01.

Please note
  • One of Wikimedia Commons’ basic principles is: "Only free content is allowed." Please do not ask why unfree material is not allowed on Wikimedia Commons or suggest that allowing it would be a good thing.
  • Have you read the FAQ?

 
SpBot archives all sections tagged with {{Section resolved|1=~~~~}} after 5 days and sections whose most recent comment is older than 30 days.

Restrict webp upload?

https://commons.wikimedia.org/w/index.php?sort=create_timestamp_desc&search=filemime%3Awebp

I suggest restricting upload of WebP files to autopatrolled users (as with MP3), because WebP uploads are very often copyvios taken from the internet, or previews of SVG logos. RZuo (talk) 14:07, 22 November 2023 (UTC)

 Strong support, seconding @Yann, Abzeronow, and Glrx: et al. Examples of my autogenerated messages of WebP copyvios: this, this, and this. And I can still remember the very first WebP file I encountered here, which is a copyvio itself! Commons:Deletion requests/File:Beijing Skyline.webp. JWilz12345 (Talk|Contrib's.) 08:17, 26 November 2023 (UTC)
  •  Support Would reduce copyvios for sure; I'm not sure the proportion is as high as some have mentioned based on spot checking, but I usually check the ones that look obvious so it's not exactly a random sample. Gnomingstuff (talk) 23:05, 29 November 2023 (UTC)
  •  Oppose I think in general, discriminating on filetype is a bad direction (same with MP3). It further complicates and obfuscates the upload process, and it doesn't stop copyright violations, it stops contributors. Most of these can easily be spotted by filtering the upload list on new contributors. Or we can just ban SVGs as well, because most logos are copyvios. —TheDJ (talk • contribs) 18:46, 30 November 2023 (UTC)
    If we had enough people checking unpatrolled uploads, we would not need such filters. Unfortunately we do not have enough people checking uploads and edits, and therefore need tools to reduce the workload. GPSLeo (talk) 19:31, 30 November 2023 (UTC)
    I think that creating these kinds of non-transparent and highly confusing roadbumps is part of the reason WHY we don't have enough people. That's my point. And I note that just two posts below this we already have someone getting tripped up by the SVG Translate software because of a similar rule #File overwriting filter blocks SVG Translate. It's one of those 'a small cut doesn't seem so bad, until there are a thousand cuts' kind of problems. Considering how much people complain about UploadWizard, stuff like this isn't helping lower the barrier to entry either. —TheDJ (talk • contribs) 11:07, 9 December 2023 (UTC)
    Plus we could just make patrolling itself easier by having uploads sorted per date; a single patroller can simply take a few minutes to patrol all new ".webm" files. Do this for every file type and we don't need to exclude people from uploading. If a patroller only wants to patrol videos, sounds, PDFs, etc., they now have to go through all uploads, but by making it easy to filter, and making these pages easily accessible to everyone and transparent (like OgreBot's Uploads by new users), we could easily patrol everything with fewer people. --Donald Trung 『徵國單』 (No Fake News 💬) (WikiProject Numismatics 💴) (Articles 📚) 11:55, 9 December 2023 (UTC)
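The kind of per-filetype patrolling list described above can already be approximated with the standard MediaWiki API: the `allimages` list module accepts documented `aimime`, `aisort`, and `aidir` parameters, much like the `filemime:webp` search linked at the top of this section. A minimal sketch (it only builds the query URL; actually fetching and paging through results is left out):

```python
# Sketch: build a Commons API query listing the newest uploads of one MIME type,
# using the documented `list=allimages` module and its `aimime` filter.
from urllib.parse import urlencode

COMMONS_API = "https://commons.wikimedia.org/w/api.php"

def newest_uploads_query(mime: str, limit: int = 50) -> str:
    """Return a full API URL listing the newest uploads of a given MIME type."""
    params = {
        "action": "query",
        "list": "allimages",
        "aisort": "timestamp",   # sort by upload time
        "aidir": "descending",   # newest first
        "aimime": mime,          # e.g. "image/webp"
        "ailimit": str(limit),
        "format": "json",
    }
    return COMMONS_API + "?" + urlencode(params)

print(newest_uploads_query("image/webp"))
```

A patroller could fetch such a URL daily per filetype of interest; OgreBot's per-user galleries mentioned above work from similar queries.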
 Support. Very few cameras or image editing tools output WebP images; when one is uploaded, it's almost always because it was downloaded from a web site which automatically converts images to that format for display (and, thus, is very likely to be a copyright violation). We already have abuse filters which block other types of uploads from new users which are overwhelmingly likely to be problematic, like MP3 files (Special:AbuseFilter/192), PDFs (Special:AbuseFilter/281), and small JPEGs (Special:AbuseFilter/156). Omphalographer (talk) 04:25, 3 December 2023 (UTC)
  •  Oppose, per TheDJ. Additionally, this would exclude a lot of people who contribute to other Wikimedia websites but aren't necessarily active here: a user could be a trusted user, an admin, or a prolific contributor, etc. on another Wikimedia website and "a noob" at Wikimedia Commons. They could have good knowledge of how video files work and which ones are and aren't free, but they will find that they can't upload anything here. If we keep making Wikimedia Commons more exclusive we will fail at our core mission to be for all Wikimedians. If new users are more likely to have bad uploads then we should have a page like "Commons:Uploads by unpatrolled users by filetype/.webm/2023/12/09" (which includes all users who aren't auto-patrolled); this proposal will simply exclude too many people. We can't know which people and uploads we exclude, because a user with a free video file will come here, attempt to upload, see "You have insufficient privileges for this action", and then never return (without ever telling anyone what (s)he wanted to upload and why they didn't). "Anyone can contribute" is the core of every Wikimedia website; the moment you compromise this you lose whatever made this place Wikimedia. --Donald Trung 『徵國單』 (No Fake News 💬) (WikiProject Numismatics 💴) (Articles 📚) 11:49, 9 December 2023 (UTC)
  •  Strong oppose, outlawing a file format will just lead to such files being converted into a different format and uploaded in a different way - but now with fewer possibilities to scan and patrol for them. This is classic prohibition: by outlawing X, users of X will find new ways to still do it, but in places where it can no longer be observed easily. I'm not even arguing in favor of the allegedly "just" 10% of .webp images that are in fact okay, which is a valid concern as well in my opinion. So: use this helpful file format to scan more efficiently for copyvios, rather than outlaw it and have the copyvios enter Commons nonetheless, but via still uncharted routes. --Enyavar (talk) 15:25, 18 December 2023 (UTC)
  •  Comment Given that WebP files are essentially Google's replacement for JPGs, PNGs, and GIFs, we cannot restrict WebP uploads to autopatrolled users unless we restrict uploads of these three formats too (as well as SVG, even for own works), because if non-patrolled users were restricted from WebP uploads, they could easily convert those WebP files to PNG or JPG as a way to upload the images to Commons. We should find a way to close the loophole of new users converting WebP files to a different image format before we can restrict WebP uploads to users with autopatrol rights, even for a user's own WebP uploads. Yayan550 (talk) 15:33, 2 January 2024 (UTC)
    • @Yayan550: I think you are missing the point here. Of course if they know what they are doing they can convert the file. The idea here is sort of a "speed bump" for a pattern that usually indicates someone who is ignorantly uploading a copyright violation. - Jmabel ! talk 19:24, 2 January 2024 (UTC)
      Precisely. And, as I noted above, we already have AbuseFilter "speed bumps" for other types of uploads, like MP3 files, which are particularly likely to be copyvios. We're aware that users can bypass the filter and upload those files after conversion, but we can explain why an upload is being blocked in the AbuseFilter message (cf. MediaWiki:abusefilter-warning-mp3), and we can review the filter logs to see if users are deliberately bypassing the filter for infringing content. Omphalographer (talk) 21:24, 9 January 2024 (UTC)
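For readers unfamiliar with how such "speed bumps" are built: a WebP filter in the spirit of the MP3 one could be a one-line AbuseFilter condition. This is only a sketch; the actual conditions of Special:AbuseFilter/192 are not reproduced here, though `action`, `file_mime`, and `user_groups` are standard AbuseFilter variables and `contains_any` is a standard AbuseFilter function:

```
action == "upload" &
file_mime == "image/webp" &
!contains_any(user_groups, "autopatrolled", "sysop")
```

Set to "warn" rather than "disallow", such a filter shows an explanatory message (like MediaWiki:abusefilter-warning-mp3) while still letting deliberate uploads through, and every hit is recorded in the filter log for review.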
  •  Support Infrogmation of New Orleans (talk) 20:01, 9 January 2024 (UTC)
  •  Support The issue seems similar to MP3 files. It's about a practical approach based on experience. No one, I assume, has anything against MP3 or WEBP as file types in principle, but it's just a matter of fact that Commons uploads of these file types tend to be copyvios more often than others, so a measure similar to the MP3 upload restriction already in place seems only sensible. The proposal is not about "outlawing" the format but restricting it to autopatrol users. Gestumblindi (talk) 14:22, 14 January 2024 (UTC)
    •  Comment as I detailed above, this will only result in evasive behavior by occasional users (those who upload ~20 files once and never again). So yes, it will bring the DETECTED number of copyvios down. --Enyavar (talk) 10:11, 29 January 2024 (UTC)
 Support Will bring the number of copyvios down; only <20% of copyright-violating users will actively evade it by using an online file converter. Also I doubt that many competent users would choose WebP as a file format; most would use PNG/JPG/SVG. —Matrix(!) {user - talk? - useless contributions} 16:53, 23 January 2024 (UTC)
 Support yes, per Yann. Hide on Rosé (talk) 08:01, 4 February 2024 (UTC)

Disabling talk pages of deletion requests

While there now exists Template:Editnotices/Group/Commons talk:Deletion requests, which notifies users to make comments on the deletion request pages themselves, it is evidently ignored, as seen in 54conphotos' comments on the talk page of Commons:Deletion requests/File:KORWARM2.jpg (which I transferred to the main page) and in Amustard's comment on a Turkmen deletion request (which I subsequently transferred to the mainspace). As it is very evident that the edit notice is being ignored, I am proposing that the "Talk" namespace be disabled on all pages with the prefix "Commons:Deletion requests/". This should be a permanent solution to incidents that could have been avoided. For existing talk pages of deletion requests with comments, the comments (including mine, if ever I had responded to uploaders in the talk namespace) should be transferred to the deletion request main pages, with consideration of the dates of the comments or inputs. JWilz12345 (Talk|Contrib's.) 09:10, 26 November 2023 (UTC)

 Support At least, the use of DR talk pages should be restricted to power users (admins, license reviewers?). Yann (talk) 09:37, 26 November 2023 (UTC)
@Yann that may be OK. Restricted to admins and license reviewers. Or the talk pages could still exist visually, but those who don't have such user rights, even autopatrolled ones, would be barred from editing talk pages and presented with a boilerplate notice that they don't have the right to edit talk pages and should instead comment on the main discussion page, with a link to the DR itself in the notice (do not expect several new users to comprehend what they are reading in the notices). JWilz12345 (Talk|Contrib's.) 10:09, 26 November 2023 (UTC)
 Support --Krd 11:23, 26 November 2023 (UTC)
 Support Christian Ferrer (talk) 11:56, 26 November 2023 (UTC)
Thank you for pointing out this Template:Editnotices/Group/Commons talk:Deletion requests location in Wikimedia. This was not ignored as you said in your comment; it simply was nowhere to be found at the time I commented. It's a shame it's too late to place a comment there, as I would have done so. Even your notes to me are very confusing, as the names of the comment pages do not match up so I can find them, as are all the previous notes received from others. Being new to this platform, I have found it very confusing to find things that are suggested when seeing comments by others.
Hopefully I will have the hours to research and better understand the workings of Wikimedia Commons in the future. Thanks again! 54conphotos (talk) 13:32, 26 November 2023 (UTC)
 Support or, if it's easier, systematically turn them into redirects to the relevant project page. - Jmabel ! talk 21:56, 26 November 2023 (UTC)
 Support --Adamant1 (talk) 00:35, 27 November 2023 (UTC)
 Support. Some good ideas above from Yann and Jmabel. We could also explore autotranscluding them to the bottoms of the DR subpages themselves.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 00:49, 27 November 2023 (UTC)
 Support. Yes, good idea, esp. with Jmabel’s and Yann’s additions. -- Tuválkin 11:34, 27 November 2023 (UTC)
 Support restricting it to anyone with autopatrol. I think these users are knowledgeable enough to know that the talk page isn't for discussing the deletion. We must create an informal and easy-to-understand AF notice though. -- CptViraj (talk) 12:19, 9 December 2023 (UTC)
Another one: this misplaced comment by ApexDynamo, which I have transferred to the main nomination page. CptViraj, I don't think even autopatrolled users are knowledgeable enough to know that talk pages are not the proper place to comment. Example: misplaced comments by Exec8 (which I also transferred soon after initiating this proposal). I suggest the use of those talk pages be restricted to admins/sysops and license reviewers. JWilz12345 (Talk|Contrib's.) 09:38, 14 December 2023 (UTC)
Still, rare cases for autopatrollers. IMHO we shouldn't unnecessarily take away the ability completely; the problem is mainly caused by newbies/non-regulars. -- CptViraj (talk) 18:13, 23 December 2023 (UTC)
 Support I have never used a talk page of a DR, nor have I seen one being used. The DRs are usually also frequented by very few editors, and the comments can easily be distinguished from one another. Paradise Chronicle (talk) 22:13, 30 December 2023 (UTC)
One more problematic use, by @Balachon77: (see this). JWilz12345 (Talk|Contrib's.) 01:00, 8 January 2024 (UTC)
Another problematic use, by SiCebuTransmissionCorrecter (talk · contribs) – Commons talk:Deletion requests/File:Line construction of Hermosa-San Jose Transmission Line. The line constructs above Hermosa-Duhat-Balintawak transmission line.png. JWilz12345 (Talk|Contrib's.) 00:10, 9 January 2024 (UTC)
no no no no no no! SiCebuTransmissionCorrecter (talk) 01:12, 10 January 2024 (UTC)
Commons_talk:Deletion_requests/File:Afrikan_och_Afrikanska_x_Ingel_Fallstedt.jpg ? DS (talk) 14:50, 22 January 2024 (UTC)

Computer generated images used in the contests

Hello, I was advised to open a new topic here; this question was previously asked at the Help desk. I have read that there is a topic which discusses AI images here, but it does not speak about using them in contests. As an amateur photographer, I like to join contests. I use my own photographs taken with my Canon camera. I would like to make sure that only our own images taken as "humans", and not generated by AI, are participating in the contests. - Is there any one of the admins or moderators who vets the pictures in the contests? - Are we all assured that no AI pictures are becoming part of the list of pics that compete in the contest? - What happens if some of us check the pics and see that there are some AI pictures in the contest? - Can we report them, or are those pics fully allowed in the competition? (I believe not, but I ask just in case.) Wikimedia does not explicitly forbid the usage of AI, but I found an implicit statement, as you see in the "Photo Challenge" page info. It speaks of "own work", or "pictures taken by a common user"; hence here comes my question: can we set an "explicit" rule instead for the Wikimedia Commons contests? Thanks for the info, which I believe is quite useful to know. Oncewerecolours (talk) 20:30, 20 December 2023 (UTC)

This amounts to a proposal to block AI images from being entered into contests, and therefore from winning.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 20:36, 20 December 2023 (UTC)
thanks for taking this into consideration Oncewerecolours (talk) 20:39, 20 December 2023 (UTC)
@Oncewerecolours: The scope of this proposal seems unclear. First, the title says "computer generated images" but the rest of your text refers to "AI images" and "AI pictures". Which do you intend to forbid? Second, which contests should be covered? You mentioned the Photo Challenge. Other obvious candidates would be the various Wiki Loves contests. Then there are valued images (kind of competitive), Commons:quality images, and Commons:featured pictures. Featured pictures are complicated because while non-competitive they do feed into Picture of the day and Picture of the Year. Would this also affect awards from other projects, like English Wikipedia's featured pictures and picture of the day? --bjh21 (talk) 21:49, 20 December 2023 (UTC)
Hi, I meant AI images, and all images that are not "photographs", meaning images taken by a human being rather than generated by software. This matches the rules stated on the photo challenge info page. An AI image is not a photograph; I don't think those images should compete in the monthly photo challenges and the likes of "Wiki Loves Earth", or "Monuments", etc. Sorry if this wasn't clear! Oncewerecolours (talk) 22:03, 20 December 2023 (UTC)
My opinion on "Wiki Loves" contests (again per my !vote below, these are merely recommendations to the contest organizers, as I don't think we should have any community-wide regulation on contest rules): Images generated wholly or substantially by AI should not be allowed. Image manipulations, whether done via conventional editing software or AI-enhanced software (e.g. DeNoise AI), are allowed but must not misrepresent the subject. -- King of ♥ 23:03, 20 December 2023 (UTC)
Yes, that is exactly what I meant. Humans take photographs using their cameras (see the symbol on the photo challenge page, a camera...); hence they are the authors. AI software generates images "artificially", not through human eyes and cameras. Photographs are images that come, first, from a human eye, not from AI software. But of course this does not prevent opening separate contests for AI images, if that makes sense, but not for the "photographs" contests like "Wiki Loves Earth, Science, Music, Cars"... or the monthly photo challenges. That was my point. Nothing prevents playing the game on two different fields, an AI contest and a photography contest. I simply don't love to see AI images in the monthly challenges, that is it, as they are NOT photographs. My 2 cents. Thanks to everyone for the follow-up. Oncewerecolours (talk) 10:49, 21 December 2023 (UTC)
  • It does seem a bit unfair for the person who wakes up early to get a picture of a mountain at sunrise to be pitted against somebody who simply typed "mountain at sunrise" a few times until they got a good AI image. It feels like the teenager who uses AI to generate their homework. GMGtalk 14:07, 22 December 2023 (UTC)

Block AI images from being entered into contests, and therefore from winning

  •  Support as proposer.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 20:36, 20 December 2023 (UTC)
  •  Support Seems very reasonable. Gestumblindi (talk) 20:44, 20 December 2023 (UTC)
  •  Support Yes. Yann (talk) 20:59, 20 December 2023 (UTC)
  • NOTE that the proposal here changed after I wrote this. At the time I wrote the following the proposal did not say that AI images were to be barred from "photography contests" but from [presumably all] contests. Yes, of course if a contest is specific to photography, then it's specific to photography! - Jmabel ! talk 06:18, 25 December 2023 (UTC)  Oppose Seems to me that this is up to the people who run the contest. I could easily imagine a contest for illustrations of a particular subject-matter area, where AI-generated entries might be entirely appropriate. - Jmabel ! talk 21:10, 20 December 2023 (UTC)
    Hello Jmabel, can we change the name of the topic to "Block AI images from being entered into monthly photo challenges and 'Wiki Loves' contests"? Sorry, I should have been more clear. I think that this is the issue: I didn't ask to ban the AI pics from ALL the contests. Thanks again and sorry for the misunderstanding. :)
    AI can definitely be used in "Best AI Images" or "Best Computer-Generated Pic of the Month" etc. I don't have anything against it. Oncewerecolours (talk) 14:15, 24 December 2023 (UTC)
    @Oncewerecolours: I wrote the topic as a simplification based on your earlier work on this subject. I would be willing to add "photography" to form "Block AI images from being entered into photography contests, and therefore from winning", would that be ok with you? More than that, I think we would need a different proposal.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 18:41, 24 December 2023 (UTC)
    @Jeff G. Of course you did well, as I wrote that before and you reported it here, but I forgot to add the type of contests... it seems this caused a misunderstanding. I don't have anything against AI pics. I just asked for a kind of measure to prevent future situations where some AI pics are posted in "photography contests" like the regular ones mentioned above. So your proposal seems fine to me.
    Thank you. Oncewerecolours (talk) 18:49, 24 December 2023 (UTC)
    @Jmabel: Would it make sense to have a separate proposal specific to photography contests?   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 06:30, 25 December 2023 (UTC)
    • @Jeff G.: It seems that is what you've already now done here. Which is fine. As I said in my recent comment, of course it is reasonable to have a contest that is specific to photography. It is possible from Alexpl's remark below that he disagrees, but since he apparently doesn't like being pinged, I'm not pinging him. I was responding to what was written here, not to what someone may have thought but didn't write. - Jmabel ! talk 06:35, 25 December 2023 (UTC)
  •  Support as a default rule for COM:PC,  Oppose as a blanket prohibition. That is, putting my Commoner hat on, I don't think we should regulate the running of individual contests as a community, but putting my PCer hat on, AI entries should be assumed to be banned from PC challenges unless otherwise stated. Likewise, truthfully described AI-generated work should not be prevented from becoming FP, and those that do become FP should not be prejudiced in the POTY contest. -- King of ♥ 21:21, 20 December 2023 (UTC)
  •  Support AI images don't belong on Commons because they are fundamentally incompatible with our principles - mainly attribution and respect for copyright. However, until the rest of the community catches up with me on that point, I'm onboard with any and every effort to limit their presence. The Squirrel Conspiracy (talk) 23:05, 20 December 2023 (UTC)
Brief note: how would you attribute the millions (from thousands to billions) of images behind one txt2img image? Are artists required to attribute their inspirations and prior relevant visual media experiences? The name 'copyright' already suggests that it is about copying, not about learning from these publicly visible artworks; and art styles like 'Cubism' or subjects like 'Future cities' aren't copyrighted. The premise is unwarranted and wrong. --Prototyperspective (talk) 14:37, 22 December 2023 (UTC)
If something truly shows the influence of millions of images, then it almost certainly does not have a copyright issue: it's just likely to be repetitive and unoriginal, unless it is somehow an interesting synthesis. But I think that is the least of the problems: most AI-generated content is unacceptable for the same reason most original drawings by non-notable people are unacceptable. - Jmabel ! talk 19:25, 22 December 2023 (UTC)
Didn't realize it's about AI-generated images. I still oppose AI entries from entering contests. George Ho (talk) 19:57, 22 December 2023 (UTC)
  •  Support I'm anticipating that allowing AI-generated works in could create a lot of clutter. Bremps... 00:01, 23 December 2023 (UTC)
  •  Oppose As long as AI is allowed on Commons, it should be allowed in every contest. Alexpl (talk) 09:54, 23 December 2023 (UTC)
  •  Oppose In my opinion, banning every tool with the label "AI" is not helpful. The educational value of works from generative AI is very limited, of course, and there may be serious and difficult issues with copyright and possibly personal rights. AFAIK, AI upscaling does not and cannot work sufficiently and leads to artifacts and partially blurred and partially oversharpened images. However, smartphones might do aggressive AI-post-processing by default. Nevertheless I understand why these techniques are not welcome. But what about "simple" noise reduction? Even Photoshop introduced an AI tool for this task and there are other tools that work nicely if post-processing is not overdone. This is just the same as with any other kind of image processing software, whereas I don't know any affordable software that can do that without either serious loss of detail or with the trendy "AI" label. And this might be a problem, because AI has a very bad reputation on Commons, which is in sharp contrast to the huge hype almost everywhere else. --Robert Flogaus-Faust (talk) 21:45, 23 December 2023 (UTC)
    Please let's consider, first, the questions I asked when I opened this topic. Please see the Wikimedia Commons home page: at the right side of the page, the photo challenge box displays an icon of a camera and the words "take a picture", etc. What I simply ask (I am relatively new to Wikimedia Commons, so I am just trying to understand how it works here) is confirmation that AI pics are excluded from the monthly photo challenges and the "Wiki Loves..." challenges; this is what it seems to me, indeed. "Take a picture" is different from "post an AI picture in the contest". AI pics have nothing to do 1) with those kinds of contests and 2) with p h o t o g r a p h y. Photography is an art made by humans through their human eyes, first (I would add, and the human soul too). And please do not make the mistake of considering manipulated digital photos at the same level; post-processing with Photoshop has nothing to do with the AI concept. Photography is art. Painting is art. Sculpture is art. They are made by humans, and hence, of course, they are not the same as reality, but they are made by humans. Even in old-style analogue photography we used (as I did in my darkroom in the past) to "mask" and "burn" the printed photos to hide details; that is an accepted technique for improving the picture's light and details. So what is the problem? What I asked here is simply to exclude those pictures from that kind of contest because they are not photographs. My subsequent question is: what happens if an AI picture is voted for and wins the contest? Will it be confirmed as winner? Or can someone intervene? I don't think they should join the contests, that is all. Please do stay on the initial topic if you could. Saying that, I AM NOT asking to exclude AI pics from WIKIMEDIA: I am asking a different thing! Thanks. Oncewerecolours (talk) 08:40, 24 December 2023 (UTC)
    You are allowed "post-processing with Photoshop" in those challenges? I had no idea. So have photos ever been excluded from the competition for having too much "work" done on them? If not, AI should be fine as well (the more religious aspects left aside). Alexpl (talk) 10:09, 24 December 2023 (UTC)
    Well, again... it is a different thing. AI pics aren't photographs... no camera involved, no lenses... no human eye. See the definition of a photograph. And see the photo challenge info page guidelines. Oncewerecolours (talk) 10:32, 24 December 2023 (UTC)
    I am sorry. I may be wrong here. And my issue is not with entirely or partially AI-generated pics, which are very problematic. I very rarely participate in photo challenges and I have never used Photoshop. In most cases, I just crop my photos with GIMP and don't do anything else. I know that there are nature photography competitions elsewhere, where the authors must submit their original RAW files for evaluation in addition to their JPEG version to make sure that nothing was inappropriately manipulated. That is alright, but I could never participate there because my cameras are set to create JPEG images only. I am a frequent participant on Commons:Quality images candidates/candidate list, though. There you can find requests to remove dust spots, CAs, decrease noise, adjust lighting, and even (rarely) retouch photographs to remove disturbing elements and improve the composition. I would not ever do the latter on Commons, because my images are supposed to show what I photographed, not some ideal work of art. I am not sure about the relation of quality images to photo contests, but where the kind of edits described above is allowed or even requested, banning AI tools does not make much sense IMO. That said, overprocessed images and upscaled images (which includes images with artifacts by AI upscaling or by other means) are not welcome there and such images get declined. And images created by generative AI engines are banned anyway because the photographer must have an account on Commons. --Robert Flogaus-Faust (talk) 11:07, 24 December 2023 (UTC)
    The human operator chooses the subject, perspective etc. in conventional photography, as well as in AI* produced pictures. *(depending on the AI program used) So voting "oppose" is still ok, I guess. Alexpl (talk) 10:47, 24 December 2023 (UTC)
    So, you are saying that 1) AI images are the same as photos taken by a human, and 2) AI pics should be allowed in Wiki Loves Monuments, Earth, Science, etc. and the monthly challenges, in the same contests as the photos taken by users? Just to understand... Oncewerecolours (talk) 11:02, 24 December 2023 (UTC)
    They are not the same: the photo guy potentially has a ton of equipment and has to move around to find subjects, while the AI guy doesn't need a camera and sits on his butt all the time. The rest of the work for both is pressing buttons and moving a mouse. But if you are unable to specify the rules of your competition, esp. what is allowed in post-production, you would have to accept those AI works as well. Merry Christmas. Alexpl (talk) 14:54, 24 December 2023 (UTC)
To be honest, I don't know. I do not remember participating in a Commons contest so far. I took a look, and monthly themes are apparently proposed here. I guess regulations & stuff could be included there for each contest. Anyway, the current heavy opposition to AI in the Wikimedia Commons community would surely prevent AI stuff from winning these contests, so I wouldn't be much worried... And... how can we identify AI images on Wikimedia Commons? Is counting fingers the only method? For example, is this one created with AI or just too much post-processed? Strakhov (talk) 16:16, 24 December 2023 (UTC)
Yes, and there is another issue with this file, so I raised it on the Village Pump. Yann (talk) 16:53, 24 December 2023 (UTC)

Block AI images from being entered into photography contests, and therefore from winning

  •  Support as proposer, with apologies to The Squirrel Conspiracy. This is only about photography contests.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 06:51, 25 December 2023 (UTC)Reply[reply]
  •  Support As per above. The Squirrel Conspiracy (talk) 06:59, 25 December 2023 (UTC)Reply[reply]
  •  Support As per above. -- Geagea (talk) 08:59, 25 December 2023 (UTC)Reply[reply]
  •  Support per above. My vote above has been dropped in favor of this new proposal. JWilz12345 (Talk|Contrib's.) 09:38, 25 December 2023 (UTC)Reply[reply]
  •  Oppose Since AI works are not considered photography anyway, no action has to be taken. Alexpl (talk) 13:56, 25 December 2023 (UTC)Reply[reply]
    @Alexpl: Since people are likely to upload AI works and submit them to photography contests, we want to prevent that, or at least keep them from winning unfairly. By opposing, you want to let those people do that. Why?   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 14:09, 25 December 2023 (UTC)Reply[reply]
    "winning unfairly" - can't comprehend, since I don't know the number of competitions affected or the actual rules for them. Concerning AI: Do you fear people A) upload AI work and categorize it as such and then enter it into a photo contest, or B) they upload AI work but claim it to be conventional photos and enter those into contests? "A" isn't really a problem because the image is already labeled as AI work and can be removed from the competition. And "B" - well, you most likely won't be able to tell* that it is an AI work anyway if done properly. If it's "B", I change my vote to  Support, but since concealed AI work may be very difficult to identify, it doesn't really matter. *(made harder by all the post-processing apparently allowed in photo competitions) Alexpl (talk) 17:23, 25 December 2023 (UTC)Reply[reply]
    Alexpl: I seek to disqualify both A and B. Postprocessed photos are still photos, but with defects removed or ameliorated in some way.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 17:33, 25 December 2023 (UTC)Reply[reply]
    There shouldn't be a necessity to disqualify "A" since the uploader themself labeled the image as AI work and therefore "not a photograph". You just need "B" and write into the rules "If a photograph is identified as an AI work, it is removed from a running competition, or, if the competition is already over, it loses the title "best image of a bug on a leaf 2024"" or whatever it is you guys excel at. Alexpl (talk) 18:07, 25 December 2023 (UTC)Reply[reply]
    @Alexpl I believe that it can happen that AI images are posted in photo contests, disguised as "brilliant photographs". How to identify them? The first clue is the lack of flaws, the perfection. The final (last but not least, though) test is the lack of EXIF data. That is a cross-test that most of the time proves to be very useful. My opinion; if anyone has a different view, please share :) Oncewerecolours (talk) 08:06, 27 December 2023 (UTC)Reply[reply]
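The EXIF cross-check described above can be sketched in a few lines. This is only a hedged illustration, not any existing Commons tool: a minimal scan of a baseline JPEG's marker segments for an APP1 segment whose payload starts with `Exif`.

```python
def has_exif(data: bytes) -> bool:
    """Return True if a JPEG byte stream contains an EXIF (APP1) segment.

    Minimal illustrative parser, not a full JPEG reader: it walks the
    marker segments after the SOI marker and stops at the start of scan.
    """
    if not data.startswith(b"\xff\xd8"):   # SOI marker: not a JPEG at all
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                # lost marker sync; give up
            break
        marker = data[i + 1]
        if marker in (0xD9, 0xDA):         # EOI or SOS: metadata section over
            break
        length = int.from_bytes(data[i + 2:i + 4], "big")
        if marker == 0xE1 and data[i + 4:i + 8] == b"Exif":
            return True                    # APP1 segment tagged "Exif"
        i += 2 + length                    # skip marker bytes + payload
    return False
```

Note that a missing EXIF block only shows the file is not a straight-out-of-camera JPEG; crops, screenshots, and uploads with stripped metadata also lack it, so, as the comment above says, it works only as one cross-check among several.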
  •  Support as I remarked above, of course a photography contest is open only to photographs. - Jmabel ! talk 20:26, 25 December 2023 (UTC)Reply[reply]
  •  Support I support this more specific proposal in addition to the broader one above. Gestumblindi (talk) 12:14, 27 December 2023 (UTC)Reply[reply]
  •  Support That's also what I thought the discussion above does or may propose. Banning AI images explicitly in such contests & campaigns would be good, since otherwise users could argue they didn't know generative photography wasn't allowed, didn't know about the respective categories, or didn't know they should have put this in the file description. A good example case may be images in this cat where it was somehow unclear whether or not they are photographs (they only had a Flickr tag 'midjourney') and which, before I intervened, were located in a photography cat. --Prototyperspective (talk) 16:05, 27 December 2023 (UTC)Reply[reply]
  •  Support. This should go without saying, but just in case there was any remaining doubt - "photography" excludes all forms of computer-generated images, "AI" or otherwise. Yes, I'm aware there are some grey areas when it comes to image retouching; I also think that photographers should have the common sense to know what is and isn't appropriate, and to disclose anything borderline when submitting photos to a contest. Omphalographer (talk) 01:45, 30 December 2023 (UTC)Reply[reply]
  •  Support. Definitely, computer-generated images shouldn't be included in photography contest.--Vulcan❯❯❯Sphere! 07:15, 5 January 2024 (UTC)Reply[reply]
  •  Support --Adamant1 (talk) 11:27, 9 January 2024 (UTC)Reply[reply]
  •  Support no non-human-created photographs in photography contests, or on Commons for that matter Gnangarra 12:18, 9 January 2024 (UTC)Reply[reply]
    • I'm sorry, but that just strikes me as wrong. Category:Monkey selfie leaps to mind; so do most photographs from outer space except the relatively small number taken deliberately by an individual astronaut/cosmonaut. Similarly, there can be appropriate images taken by security cameras. Conversely, AI rarely takes "photographs", it creates images by other means; I'd have no problem at all with something where an AI-driven robot was operating an actual camera, as long as the images were in scope, did not create privacy issues, etc. - Jmabel ! talk 19:34, 9 January 2024 (UTC)Reply[reply]
  •  Oppose in favor of the "let organizers figure it out" option and per what I wrote above. There are a wide range of interpretations of "AI images". If you mean "generated wholly by AI", that should be stated clearly. Further, not all contests are identical. Certainly the overwhelming majority of photography contests should disallow AI, but I don't know that we need a blanket prohibition. — Rhododendrites talk 18:42, 28 January 2024 (UTC)Reply[reply]

Allow the organizers of the contest to decide whether or not they wish to allow AI images[edit]

Hopefully at some point we can create a list of models that are trained only on freely licensed images and allow for artwork created by them to a greater degree than we do with AI artwork at this point. I feel like that's really the only way forward here without disregarding copyright in the process though. --Adamant1 (talk) 07:36, 5 January 2024 (UTC)Reply[reply]
  •  Support. I support AI-specific competitions and this is a good compromise.--Vulcan❯❯❯Sphere! 07:09, 5 January 2024 (UTC)Reply[reply]
  •  Oppose The proposal to ban AI artwork specifically from photography contests is better IMO. There's no reason we can't just exclude AI artwork from photography contests while allowing it in others. This would essentially take away our ability to moderate how AI artwork is used in contests at all, which I don't think is in the project's interests. --Adamant1 (talk) 11:32, 9 January 2024 (UTC)Reply[reply]
  •  Oppose event organisers must comply with Commons requirements for all images uploaded to Commons. Gnangarra 12:16, 9 January 2024 (UTC)Reply[reply]
  •  Support But I'd go further and say that we should explicitly encourage contest organizers to articulate rules about the use of AI tools. There are uses of AI that are compliant with our scope, and even some images wholly generated by AI can be considered in scope. This is the only option that isn't a blunt instrument. — Rhododendrites talk 18:47, 28 January 2024 (UTC)Reply[reply]

Restrict closing contentious deletion discussions to uninvolved admins[edit]

RFCs can only be closed by uninvolved editors, but deletion discussions can be closed by any admin, even if they are heavily involved in the discussion. I propose changing "administrator" to "uninvolved administrator" in the first sentence of Commons:Deletion requests#Closing discussions. I propose adding the following sentence to Commons:Deletion requests#Closing discussions: "In cases of contentious requests, discussions should be closed by an uninvolved administrator." Nosferattus (talk) 01:55, 29 December 2023 (UTC)Reply[reply]

  •  Comment My first thought is that this seems a bit overly broad, especially given the significant problem we have with deletion request listing backlogs. I've been an admin on Commons for more than 19 years. If I started a deletion request, or commented on it, I *generally* let some other admin take care of closing it. However, there have been occasional exceptions - mostly when trying to clean up months-old backlogs, with no new discussion for months and no counterarguments offered to what seems a clear case per Commons/copyright guidelines - I might feel it is a "SNOWBALL" and that since I'm there I might as well take care of cleaning it up. I try to avoid conflicts of interest, and even appearances of conflicts. Does having commented on something inherently create a conflict of interest? (Examples: 1) A deletion request is made by an anon with a vague reason - I comment that 'per (specific Commons rule) this should be deleted'. Months later I notice that this listing was never closed and no one ever objected to deletion. Is going ahead and closing it per the rule I mentioned earlier a conflict of interest? 2) Someone listed an image as out of scope. I commented, whether agreeing or disagreeing. Then someone else points out that the file is a copyright violation, which the nominator and I had not noticed. Should I be prohibited from speedy deleting the copyright violation because I earlier commented on deletion on different grounds?) I'm certainly willing to obey whatever the decision is; I just suggest this could be made a bit narrower, perhaps with specific exceptions? Otherwise I fear this could have an unintended side effect of making our already horribly backed-up deletion request situation even worse. -- Infrogmation of New Orleans (talk) 03:09, 29 December 2023 (UTC)Reply[reply]
    Or we could just make it so the rule only applies to DRs that have lasted for less than a month Trade (talk) 03:23, 29 December 2023 (UTC)Reply[reply]
  •  Oppose This would be a good rule if we had enough admins, but with the current number of active admins it could increase the backlog dramatically. We could maybe implement a rule that the deleting admin and the admin who declines an undeletion request cannot be the same, and likewise that, for a reopened deletion request of a file that was not deleted, a decline of the new request has to be made by another admin. Both cases of course need exceptions for vandalism or the abuse of requests. GPSLeo (talk) 12:39, 29 December 2023 (UTC)Reply[reply]
  •  Support with reservations: at the same time, it's a problem when an admin doesn't participate in the discussion and doesn't directly address arguments or give rationales for deletion. This is especially problematic for discussions with only a few votes, for example a nomination and one Keep vote (example example) that directly addresses or refutes the deletion rationale, as well as discussions where there is no clear consensus but a ~stalemate (if not a Keep) when votes by headcount are concerned (example). I've seen admins close such discussions (see examples) abruptly without prior engagement and so on. So I think it would be best that, for cases of these two types, closing admins are even encouraged to (have) participate(d) in the discussion, but only shortly before closing it / at a late stage. On Wikipedia there is the policy WP:NODEMOCRACY, under which reasons and policies are more important than vote headcounts, especially for cases that are unclear by headcount, but it seems like here both voting by headcount and admin authority are more important. This wouldn't increase the backlog but only distribute the discussion closing differently. Bots, scripts & AI software could reduce the backlog, albeit I don't know of a chart that shows the size of the WMC backlogs, and it wouldn't significantly increase due to this policy change. Prototyperspective (talk) 13:16, 29 December 2023 (UTC)Reply[reply]
 Oppose The proposal is currently overly broad and would be detrimental to shortening our backlog. I don't close DRs that I have a heavy amount of involvement in, except when I withdraw ones that I had started. If I leave an opinion on whether a file should be kept or deleted, I wait for another admin to close. Sometimes, though, I like to ask questions or leave comments seeking information that helps me decide on borderline cases. I'd be more supportive if this proposal were more limited. I can also agree with GPSLeo that the deleting admin and the admin who declines UDRs of the file should not be the same one. Abzeronow (talk) 16:54, 29 December 2023 (UTC)Reply[reply]
@Abzeronow: Do you have any suggestions or guidance for how a more limited proposal could be worded? How would you like it to be limited? Nosferattus (talk) 17:34, 29 December 2023 (UTC)Reply[reply]
 Support This should be natural. Since it isn't natural to too many admins, it needs a rule. --Mirer (talk) 17:48, 29 December 2023 (UTC)Reply[reply]
 Comment There are times when posters to UDR present new arguments or new evidence. If that is enough to convince the Admin who closed the DR and deleted the file, why shouldn't they be allowed to undelete?   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 18:03, 29 December 2023 (UTC)Reply[reply]
 Oppose per Abzeronow.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 18:05, 29 December 2023 (UTC)Reply[reply]
  • @Yann: Although I appreciate your work on deletion and your opinion here, this reply comes across as completely dismissive. No one has said anything about votes. Of course discussions are closed according to Commons policies. Do you believe that admins have a monopoly on the understanding of Commons policies? Do you understand why closing a contentious discussion you are involved in could be problematic and discourage other people from participating in the process? Nosferattus (talk) 16:29, 30 December 2023 (UTC)Reply[reply]
  • Contrary to picture contests, opinions in DRs are not votes. Participants, including non-admins, can explain how a particular case should be resolved according to Commons policies, but it is not uncommon that a DR is closed against the majority of participants. Also, given the small number of really active admins, it is not possible for admins to exclude themselves from closing if they give their opinions. Yann (talk) 09:57, 31 December 2023 (UTC)Reply[reply]
  •  Oppose. Involved editors should not close discussions, but I'm leery of making that an absolute rule. There are times when it can be reasonable. I also do not want to encourage complaints about reasonable closures just because the closer had some involvement. Glrx (talk) 01:39, 30 December 2023 (UTC)Reply[reply]
  •  Oppose - This is presented without evidence of a problem (or even articulation of one) and without articulation of thought or analysis related to potential downsides, indeed as referenced above. Additionally, reliance on--here, increasing use of--adjectives in governing documents is terrible practise in real life and on-site. All this would do is shift post-closure disagreement from "should [Admin] have closed this" to the even more complicated "was [Admin] 'involved'" and "is the discussion 'contentious'". Alternatively stated, to the extent this proposal seeks to limit biased closures, all it would do is provide more avenues to argue such closures are within the range of discretion for interpretation of those terms. If an admin is making inappropriate closures, raise the issue at a noticeboard. If a prospective admin has not demonstrated an ability to use discretion and abstain when too close to an issue, oppose their RfA. Ill-considered policy changes are not the correct approach. Эlcobbola talk 17:03, 30 December 2023 (UTC)Reply[reply]
    • "Involved" means they participated in the discussion. "Contentious" means different opinions were presented. These criteria are easy to objectively determine. I added "contentious" because other editors wanted the criteria narrowed. Nosferattus (talk) 18:16, 30 December 2023 (UTC)Reply[reply]
  •  Oppose I'd be for this if there were more people who could close discussions. There just aren't enough who can at this point to justify limiting the number even more by approving this. Although it would be a good idea if or when there are enough users who can close deletion discussions to make up for the deficit. --Adamant1 (talk) 11:31, 31 December 2023 (UTC)Reply[reply]
  •  Support As an admin, I have always followed this as my personal policy. It simply wouldn't feel right to me to close a discussion where I was substantially involved, giving my own opinion. When a deletion request didn't have a lot of discussion but I have a clear opinion on the matter, I often decide to give just my opinion and, consequently, leave the discussion for the next admin to decide. I agree with Mirer and think "it should be natural". However, I have encountered admins who do close such discussions, even closing their own proposals and deciding that a discussion went in favor of their opinion when this isn't perfectly clear. So, making this an official policy would be a good idea IMHO. I would still allow closure of discussions where the admin's involvement was only technical. Gestumblindi (talk) 15:06, 31 December 2023 (UTC)Reply[reply]
 Support It's a fair proposal and it would avoid discussions in the future. I actually thought this was already normal, as I have never experienced an involved admin closing a discussion. Paradise Chronicle (talk) 17:59, 31 December 2023 (UTC)Reply[reply]
How do you define involved? I have often had cases where I asked the uploader a question and, as I got no response, deleted the file. GPSLeo (talk) 18:51, 31 December 2023 (UTC)Reply[reply]
Of course I'd also say admins who become involved in a technical, formal way, such as correcting mistakes in formatting or spelling, or ensuring that the uploader had enough time to defend their file, should be allowed to close a DR. But in my opinion no admin should close a discussion in which they have voted or presented an argument in support or opposition. Paradise Chronicle (talk) 19:30, 31 December 2023 (UTC)Reply[reply]
  •  Support There's zero reason admins should be closing DRs they have either voted or heavily commented in. No one expects an administrator not to close a DR where they have made a benign, meaningless comment. But there's zero reason they should be able to close one if they have participated beyond that. Especially in cases where the participation shows they are invested in a specific outcome. --Adamant1 (talk) 11:36, 9 January 2024 (UTC)Reply[reply]
  •  Oppose as per Yann and Эlcobbola. DRs are not a popularity contest. 1/ DRs should be closed following our policies, not the majority of votes. 2/ It is hard enough to find administrators to look at some complicated DRs, and if in addition we prevent "involved" administrators from closing DRs, it would become harder to find "uninvolved" administrators who are able to digest long discussions containing 2, 3, or more points of view. 3/ If some closure is contentious, there are still various places where potential issues can be raised (Village Pump, Village Pump/Copyright, Admin Noticeboard, Undeletion Requests, etc.). 4/ To restrict freedom of movement for the (not enough) administrators who are trying to do the job well is not a good thing IMO. Christian Ferrer (talk) 11:05, 10 January 2024 (UTC)Reply[reply]

Allow image-reviewers to delete files[edit]

In the discussion above, many editors complained that there aren't enough admins to deal with the file deletion backlog. To address this problem, I propose that we enable the delete right for the image-reviewer user group and allow image-reviewers to close deletion discussions. This would add 323 more people who could help address the deletion backlog. Nosferattus (talk) 18:34, 30 December 2023 (UTC)Reply[reply]

  •  Oppose Active image reviewers with free capacity can apply as admin. --Krd 19:00, 30 December 2023 (UTC)Reply[reply]
  •  Oppose - Image reviewer is a very low standard and, in actual practise, primarily entails mere comparison of an uploaded file's purported license to licensing information at the source. There have, for example, been instances of image reviewers credulously "passing" obviously laundered licenses and/or failing to consider appropriately the multiple copyrights that can exist in derivative works. Deletion is a sensitive enough function that a greater degree of community approval should be present to assess competence in those and other issues (the LR flag is granted by a single admin, which is not adequate evaluation). Giving more users the delete button, especially based on an inadequate criterion like the LR flag, is overly simplistic and fails to understand the root cause of the issue; what is needed is not more deleting users, but more participation. The majority of backlogged DRs relate to complex issues that have had little to no discussion. More participation by all users there--rather than, say, here--would allow existing admins to assess consensus and act. How many of those 323 reviewers have opined at, say, requests in Commons:Deletion requests/2023/09? Almost none? Эlcobbola talk 19:18, 30 December 2023 (UTC)Reply[reply]
  •  Oppose per above. The Squirrel Conspiracy (talk) 02:38, 31 December 2023 (UTC)Reply[reply]
  •  Oppose per elcobbola. Glrx (talk) 02:46, 31 December 2023 (UTC)Reply[reply]
Eventual  Oppose, deletion closures are best handled by exceptional users who are prudent in decision-making (the admins). We have a much more severe backlog at COM:Categories for discussion, and I think autopatrolled users should have the right to delete categories if the CfD results in deletion of a certain category, to enable category moves. (Must I open a proposal on this as a new section here?) JWilz12345 (Talk|Contrib's.) 19:16, 10 January 2024 (UTC)Reply[reply]

I withdraw the proposal. Anyone have any other ideas for addressing the problem? Nosferattus (talk) 04:27, 31 December 2023 (UTC)Reply[reply]

  •  Support - As there is a separate user group that handles copyright and reviews uploads, a task otherwise included only in the admin toolset, the community trusts them with reviewer access. Therefore, I believe that delete access should also be included in the reviewer group. Thank you.--C1K98V (💬 ✒️ 📂) 05:48, 31 December 2023 (UTC)Reply[reply]
  •  Comment Is there any reason delete access can't be granted on a case-by-case basis, as is now being done for people who want to overwrite files? --Adamant1 (talk) 11:14, 31 December 2023 (UTC)Reply[reply]
    Delete access is already granted on a case-by-case basis via Commons:Administrators/Requests. It's not the project's goal to make the set of procedures and policies as complicated as possible. Krd 11:29, 31 December 2023 (UTC)Reply[reply]
  • The proposal is already withdrawn, so I think there is no need to formally oppose it now, but just adding my two cents: Deciding deletion discussions and deleting files is a central part of admin rights and requires the kind of experience on Commons that we usually see as grounds for granting these rights - so, if someone thinks they're experienced enough to decide deletion discussions, they should simply start a request for adminship, as Krd says. Also, I think there is currently no technical way to separate deletion rights from the undeletion right, with which comes the ability to view "deleted" files (which actually aren't deleted technically, but visible only to admins), and this group shouldn't be made too large for legal reasons (it's already questionable to not actually "hard-delete" images which were deleted e.g. for copyright reasons, and only somewhat justifiable by restricting access to a small group, that is, admins). Gestumblindi (talk) 15:00, 31 December 2023 (UTC)Reply[reply]
    • @Gestumblindi: "only somewhat justifiable": it's entirely justifiable on that basis. Remember, the legal aspects of "fair use" easily let us host content on that basis for a highly restricted audience. Quite likely, as an educational site we could host most files (and certainly all that are used legitimately in any of the Wikipedias) publicly on that basis if that were our policy, because our site is educational. The exclusion of "fair use" files from Commons is largely a policy issue, not a legal issue. - Jmabel ! talk 19:52, 31 December 2023 (UTC)Reply[reply]
      Thanks, Jmabel, I tend to look at legal aspects from my European perspective, where we don't have the US fair-use provisions (therefore, for example, German-language Wikipedia doesn't accept "fair use" either), but of course you're right that, if you consider fair use, wider access to "deleted" (flagged as deleted) files shouldn't be that much of an issue copyright-wise (and as Bjh21 points out, it seems that it would be possible to grant deletion without undeletion rights, though this would create new issues; will answer to that below). There are, of course, still images that are deleted for other reasons than copyright, such as personality rights, and in these cases fair use doesn't help us. Wide access to files deleted because of privacy concerns, for example, could be an issue. Gestumblindi (talk) 09:16, 2 January 2024 (UTC)Reply[reply]
    Point of information: mw:Manual:User rights doesn't say that delete depends on undelete (or any other right), so I think it should be technically possible to grant just delete to licence reviewers. And meta:Limits to configuration changes notably lists only "Allow non-admins to view deleted content" as a prohibited change, and not allowing non-admins to delete pages. --bjh21 (talk) 18:49, 31 December 2023 (UTC)Reply[reply]
    @Bjh21: Thank you, that's good to know. However, I think that granting only the "delete" right without "undelete" (and thus without the ability to view deleted content) would create new issues, too. People with that delete-only right couldn't review their own deletions (except if it would be possible and allowed to let them only view content they deleted themselves?)... Gestumblindi (talk) 09:19, 2 January 2024 (UTC)Reply[reply]
    Indeed, I was only commenting on your "no technical way" claim. I agree that in general it's a bad idea to give someone the ability to do something they can't undo. --bjh21 (talk) 15:06, 2 January 2024 (UTC)Reply[reply]
    Why couldn't they just contact an admin and have them undelete the file in the rare cases where they would need to? That would still be less work than the current system. Although it seems like undeleting files would be a non-issue if they were only closing DRs with clear outcomes to begin with. --Adamant1 (talk) 15:23, 2 January 2024 (UTC)Reply[reply]
  •  Support of course Юрий Д.К 19:34, 3 January 2024 (UTC)Reply[reply]
  •  Oppose, unless image-reviewers get vetted in the same way as administrators I don't see why they should be able to delete files. Having more eyes on files can help, the issue with the current system isn't that it's a bad system per se, rather it's understaffed. Perhaps we could split administrators into more user groups in the future (in fact, I very much encourage it), but the two (2) user rights of blocking people / accounts and deleting pages are the only rights that need to be exclusive to administrators. --Donald Trung 『徵國單』 (No Fake News 💬) (WikiProject Numismatics 💴) (Articles 📚) 00:01, 8 January 2024 (UTC)Reply[reply]
    @Donald Trung not only understaffed, but also faced with very contentious deletion requests involving copyright on the objects the photos or videos show. From my personal perspective, many of the deletion discussions on freedom of panorama would actually be avoidable if the FoP rules of the more than 100 countries we treat today as having no FoP became fit for the new media/Internet age. There would then be a substantially smaller number of deletion requests to deal with, as the likes of Burj Khalifa, Wisma 46, Bayterek Tower, Malacañan Palace, or N Seoul Tower would become acceptable for commercial-license hosting here. Perhaps the remaining DRs would concern public monuments and landmarks from countries that seem anti-FoP, like France, Costa Rica, Argentina, and Ukraine. This is just my personal POV regarding the great number of DRs that are actually avoidable. JWilz12345 (Talk|Contrib's.) 00:58, 8 January 2024 (UTC)Reply[reply]
    • @JWilz12345: I'm sorry, maybe I missed your point, but are you just saying that we'd have fewer DRs if more countries had liberal Freedom of Panorama? Or are you saying something else? In particular, are you saying something that has bearing on this proposal? - Jmabel ! talk 05:19, 8 January 2024 (UTC)Reply[reply]
      @Jmabel that is just my insight, and yes, a substantial share of DRs concerns derivative works, and a share of DW DRs concerns FoP-related issues. It used to be common to nominate Russian buildings and Belgian monuments, but ever since more liberal FoP rules were implemented in both countries, only a small share of DRs concerns Russian buildings and Belgian monuments. There is a slight reduction in the number of DRs as a result (improper DRs targeting such works can be speedily kept), slightly reducing the backlog being experienced. I have seen some of the most overused DR subjects here, concerning the Louvre Pyramid and the Hassan II Mosque (but I don't expect France and Morocco will embrace Wikimedia-friendly FoP rules anytime soon). JWilz12345 (Talk|Contrib's.) 10:05, 8 January 2024 (UTC)Reply[reply]
      • We can only follow the law, not write it. I'm sure that at least 95% (ninety-five percent) of contributors would want more liberal copyright ©️ laws to allow more educational content, but the truth is that pro-FoP lobbying is slow and oftentimes unproductive. As much as I would want all of us to become more politically active and create more lobbying organisations (in fact, not too long ago I proposed the creation of "Commons:Lobby" to organise such actions), admins must enforce these laws and these images may not be hosted publicly until the laws change (then we can undelete entire categories of images). --Donald Trung 『徵國單』 (No Fake News 💬) (WikiProject Numismatics 💴) (Articles 📚) 22:37, 9 January 2024 (UTC)Reply[reply]
It's just my opinion as a layperson, but there are at least a couple of countries outside of the United States where users could embrace fair use if they wanted to. There just doesn't seem to be any will on their part to do it, though. Understandably, because it's much easier to just upload images here and then blame other people if they are deleted than to put the time and effort into managing things themselves on their end. I'm sure there are plenty of countries out there where we (or more importantly Wikipedia) could take a much more lax stance without running into problems, if there were just the will to do it. 99% untested and extremely low risk to begin with anyway. Except religions need their theologies. --Adamant1 (talk) 23:06, 9 January 2024 (UTC)Reply[reply]
@Donald Trung a good start is a page I started at meta-wiki: meta:Freedom of Panorama. It should begin kicking off the things that pro-liberal-FoP advocates need. Anyone can also contribute to that page. JWilz12345 (Talk|Contrib's.) 23:10, 9 January 2024 (UTC)Reply[reply]
@Adamant1: Certainly there are places where we could get away with a lot, especially for use only within an educational project like Wikipedia. But (at least as far as Commons is concerned) that's not the point. The point is that for images that are copyrighted we try to confine ourselves to images where, as long as reusers comply with the offered license, they (the reusers) won't be in trouble, not just that we won't be in trouble.
If we really wanted to change this policy: the one thing we could, in principle, change would be to allow some content with NC licenses. There are many countries that have FoP for non-commercial use, even though they limit commercial use. But I also understand why, early on, we decided not to allow NC licenses: we wanted to encourage people to use freer licenses than that. I'd guess that many of our larger contributors of original work would opt for NC if they could stay involved in the project and stick to NC licenses. I probably would: I'm sure I've cost myself thousands of dollars by offering such free licenses on all of my work. Of course, I've also made that work tremendously more available, and given it a far wider reach. - 19:05, 10 January 2024 (UTC) — Preceding unsigned comment added by Jmabel (talk • contribs)
  •  Comment IIRC, in previous discussions (here? somewhere else?) there was an issue of the WMF being unwilling to separate delete from undelete, and for legal reasons, we can't grant undelete to users who have not passed some RfA like process. GMGtalk 14:07, 12 January 2024 (UTC)Reply[reply]

Noinclude categories for DRs[edit]

Is there a way to add categories inside <noinclude> tags to DRs with HotCat? I have been helping out by adding categories to DRs with HotCat, but no such category appears there. Maybe there is a hidden category for it? If not, is there another solution? Paradise Chronicle (talk) 22:32, 30 December 2023 (UTC)Reply[reply]

I believe a solution to the issue was requested before, in 2017, but there was no answer. Paradise Chronicle (talk) 13:28, 31 December 2023 (UTC)Reply[reply]
@Paradise Chronicle yes, indeed no responses before archival. JWilz12345 (Talk|Contrib's.) 11:25, 4 January 2024 (UTC)Reply[reply]
 Strong support, so that I do not need to resort to two tedious things: copying a certain <noinclude>XXXXX FOP cases/yyyyy</noinclude> and pasting it into DR pages while keeping the JavaScript of my mobile browser turned off (to avoid any issues in text formatting, as the wiki text editor seems to treat a few types of copied texts as formatted text rather than plaintext); or, when launching deletion requests, being forced to select "edit source" and type the same category wiki-code. JWilz12345 (Talk|Contrib's.) 12:41, 4 January 2024 (UTC)Reply[reply]
 Support That is a technical request and thus should go to Phabricator, the technical requests page and/or Commons:Idea Lab. Prototyperspective (talk) 14:15, 4 January 2024 (UTC)Reply[reply]
HotCat is a JavaScript tool created and maintained locally at Commons. It isn't part of MediaWiki, and changes to it don't require intervention by a WMF developer. Omphalographer (talk) 05:27, 6 January 2024 (UTC)Reply[reply]
Phabricator isn't just for WMF developers. I just checked, and indeed HotCat issues are not at Phabricator. I think HotCat should be part of the default software and its issues tracked in a proper issue tracker, preferably Wikimedia's main one. So it seems that for now it would need to be proposed at Help:Gadget-HotCat if it's to be implemented via HotCat. Prototyperspective (talk) 10:55, 30 January 2024 (UTC)Reply[reply]
 Comment Just to make sure I understand: (1) any time a category is added to an individual DR with HotCat, we always want it inside of a <noinclude> element and (2) We can identify a page as a DR because its name begins with "Commons:Deletion requests/" and what follows that is not of the form "dddd", "dddd/dd", or "Archive/dddd/dd" (where each 'd' is a digit) or (to cover translations of Commons:Deletion requests) 'aa' or 'aaa' (where each 'a' is one of the 26 lowercase letters in original ASCII). Are there other exceptions that would need to be made besides those five forms? - Jmabel ! talk 20:29, 4 January 2024 (UTC)Reply[reply]
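For what it's worth, the rule in point (2) above can be sketched as a small JavaScript check. This is a rough sketch only: the function name is hypothetical, and the excluded forms follow the examples discussed in this thread (date indexes, daily listings, archives, and language-code translations).

```javascript
// Decide whether a page title names an individual deletion request,
// as opposed to a date-index, archive, or translation subpage.
function isIndividualDR(title) {
    var prefix = 'Commons:Deletion requests/';
    if (title.indexOf(prefix) !== 0) return false;
    var rest = title.slice(prefix.length);
    return !(
        /^\d{4}(\/\d{2}){0,2}$/.test(rest) ||             // "2016", "2024/01", "2024/01/05"
        /^Archive\/\d{4}\/\d{2}(\/\d{2})?$/.test(rest) || // "Archive/2024/01/04"
        /^[a-z]{2,3}$/.test(rest)                         // two/three-letter language codes
    );
}
```

A HotCat hook could then wrap any category it adds in <noinclude> only when this check returns true.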
@Jmabel: I don't see a use case for live cats in pages with what follows of the form "dddd", "dddd/dd", or "Archive/dddd/dd" (where each 'd' is a digit).   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 06:50, 5 January 2024 (UTC)Reply[reply]
I was about to answer but was also afraid to show off my ignorance. Now that Jeff G. also doesn't seem to know, I have some courage and admit I am afraid I can't answer the second part of your question. I edit mainly in visual mode, and even after your explanation I have no idea what "dddd/dd" means. But I would be very glad to have categories that already have the .... and are detectable with HotCat, so I do not have to resort to the several editing steps similar to those described by JWilz12345. Paradise Chronicle (talk) 06:59, 5 January 2024 (UTC)Reply[reply]
@Paradise Chronicle: I know what most of them are, I just don't see the use case. For instance, Commons:Deletion_requests/2016 appears to be a badly-named one-off, Commons:Deletion requests/2024/01 contains this month's active DRs, Commons:Deletion requests/2024/01/05 contains today's active DRs, and Commons:Deletion requests/Archive/2024/01/04 contains the DRs started yesterday and already archived because the subject page(s) were speedily kept or speedily deleted. Tracking down why pages like Commons:Deletion requests/2024/01 are categorized is an exercise best left to the reader (historically, this is because people are not as careful with noinclude as JWilz12345 is).   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 07:36, 5 January 2024 (UTC)Reply[reply]
@Jeff G.: do I understand that you are saying that, functionally, these exceptions are unnecessary, because it would be fine if the rule of adding a <noinclude> element also applied to these? That's fine with me. Might it even be OK to apply this to the language-specific pages? I think it would be. The original proposal was specific to DRs, and I was concerned with how you could technically identify a DR. But, yes, it's simplest if you can just say that anything that begins with "Commons:Deletion requests/" follows this rule. - Jmabel ! talk 19:59, 5 January 2024 (UTC)Reply[reply]
@Jmabel: Yes, it seems so.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 01:14, 6 January 2024 (UTC)Reply[reply]
Deletion requests posted by the mobile app include those tags automatically btw, they have <noinclude>[[Category:MobileUpload-related deletion requests]]</noinclude> as part of the DR. That also automatically changes to <noinclude>[[Category:MobileUpload-related deletion requests/deleted]]</noinclude> when I close the DR as deleted, so there must be some code somewhere doing this. --Rosenzweig τ 09:08, 22 January 2024 (UTC)Reply[reply]

Start File Navigation from Current Page in Large Categories[edit]

Large categories, such as Category:Scans from the Internet Archive, pose an issue when users click the category link from a file page like File:The Gull (IA v17n1gullv17ngold).pdf. Currently, it always starts from the first file in the category. However, users are more likely to want to see files around the current file. Therefore, can we modify the link to direct users to Category:Scans_from_the_Internet_Archive&filefrom=v17n1gullv17ngold? This adjustment would provide more relevant file links for the user.

To implement this, I propose the introduction of a MediaWiki magic word like __STARTFROMCURRENTPAGE__. When added to category pages, this magic word would ensure that when users click the category link from a file or other types of pages, it will start from the page's sort key.

It's important to note that Wikimedia Commons differs from Wikipedia, as pages are not interlinked. Consequently, many pages are not indexed by Google due to a lack of links from other pages. Implementing this change and allowing /w/index.php?title=Category in robots.txt would create more interlinks, potentially leading to increased file indexing.

維基小霸王 (talk) 02:50, 2 January 2024 (UTC)Reply[reply]

Since this feature would require changes to MediaWiki, you should probably ask at m:Phabricator, not here.
For what it's worth, this change would likely make search indexing worse, not better - each file would link to a slightly different page within the category, creating a larger number of redundant pages to be indexed. Omphalographer (talk) 19:31, 2 January 2024 (UTC)Reply[reply]
It is pretty easy to add sane navigation to the category page, if the images are named (or sorted by sortkey) after a pattern that imposes order. - Jmabel ! talk 23:57, 2 January 2024 (UTC)Reply[reply]
This is possible for middle-sized categories, but not for very big categories like what I have mentioned. [1] 維基小霸王 (talk) 01:18, 3 January 2024 (UTC)Reply[reply]
I can see why that would be tough at that scale. So basically, what you'd want is to be able to set things up so that if the file's sortkey (by default the filename) is FOO and it is in Category:BAR, you'd like an easy way to get to https://commons.wikimedia.org/w/index.php?title=Category:BAR&filefrom=FOO. I'm not 100% sure that is desirable as default behavior, but I can see why it would be nice to have a choice of that mode. I think it should be possible to achieve that client-side with a user script. - Jmabel ! talk 02:31, 3 January 2024 (UTC)Reply[reply]
Part of the problem here seems to be that these files have DEFAULTSORT set to unhelpful values (the Internet Archive file ID). Removing those might improve matters. Omphalographer (talk) 02:32, 3 January 2024 (UTC)Reply[reply]
Yes, if you are not sorting the category in the order you want it, you'll have quite a problem getting what you want. On the other hand, I think that particular DEFAULTSORT is going to keep the pages of a book together pretty much as you'd like them to be.
In the example I gave above, the HTML for the category link would currently be <a href="/wiki/Category:BAR" title="FOO">BAR</a>, which is pretty tractable to massage in script if what you want is to produce <a href="/w/index.php?title=Category:BAR&filefrom=FOO" title="FOO">BAR</a>. Jmabel ! talk 03:20, 3 January 2024 (UTC)Reply[reply]
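As a minimal sketch of that client-side massage (assuming, per the HTML shown above, the sortkey is available alongside the category link; the function name is hypothetical):

```javascript
// Rewrite a plain category href ("/wiki/Category:BAR") into one that
// opens the category listing at the file's sortkey
// ("/w/index.php?title=Category:BAR&filefrom=FOO").
function addFilefrom(href, sortkey) {
    var m = href.match(/^\/wiki\/(Category:.+)$/);
    if (!m) return href; // not a category link; leave it untouched
    return '/w/index.php?title=' + m[1] +
           '&filefrom=' + encodeURIComponent(sortkey);
}
```

A user script or gadget would then apply this to each category link at the bottom of a file page.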
@Omphalographer: I guess Phabricator would require a local consensus first?
Presently, only the first page of every category is allowed to be indexed. Maybe more should be allowed to be indexed on Wikimedia Commons, for more links to files. Any better ideas? 維基小霸王 (talk) 01:19, 3 January 2024 (UTC)Reply[reply]
I like this idea. -- Tuválkin 05:08, 6 January 2024 (UTC)Reply[reply]
So I think no one will object if I propose a magic word on Phabricator to make starting from the page's sort key optional? --維基小霸王 (talk) 02:32, 7 January 2024 (UTC)Reply[reply]
I think you'd do better to indicate simply that you want a way to go into a category and start from the page's sort key, rather than dictate to the developers how you want it done. As I said above, I think it would be pretty simple to do this client-side with a user script, so it may just be a "gadget". - Jmabel ! talk 03:34, 7 January 2024 (UTC)Reply[reply]
Thank you. I will make a proposal. 維基小霸王 (talk) 01:07, 8 January 2024 (UTC)Reply[reply]

Put "Uploads" into mobile menu[edit]

  1. Visit https://commons.m.wikimedia.org/ with your cellphone, and login.
  2. On the Commons home page, you will notice a big blue button in the middle of the screen: "Upload".
  3. Now tap the person icon in the upper right.
  4. You will see in the menu "contributions" but no "Upload".

In step 2 we have learned that Upload is a very important function. But for no good reason one cannot check one's uploads from the mobile menu. One needs the desktop menu, or to enter the Uploads URL directly. Jidanni (talk) 04:00, 6 January 2024 (UTC)Reply[reply]

I may be confused, but isn't #2 about uploading a file and #4 about seeing the Special:MyUploads page? Are you saying it should be possible to start an upload from any page, or that it should be possible to easily see Special:MyUploads? - Jmabel ! talk 07:45, 6 January 2024 (UTC)Reply[reply]

Ban the output of generative AIs[edit]

Now we know that Artificial Intelligences are being trained on modern nonfree works. Please read this: Generative AI Has a Visual Plagiarism Problem > Experiments with Midjourney and DALL-E 3 show a copyright minefield, by Gary Marcus and Reid Southen   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 10:11, 9 January 2024 (UTC)Reply[reply]

  •  Support At least if the output is generated by Midjourney, if not also DALL-E. Although the latter seems to be less susceptible to it, at the end of the day both were trained on nonfree works, so there's a risk of creating derivatives with either one. It's not like we can't allow images generated by models that were trained on freely licensed images, if or when there are any. But allowing images from a model that clearly disregards copyright, apparently even when someone uses a benign prompt, is just asking for trouble. Not to mention it's also antithetical to the project's goals. I don't think a full ban on anything generated by AI whatsoever, regardless of the model or type of output, would really be workable though. At the end of the day, things like image upscaling and colorization are probably not harmful enough to justify banning them. --Adamant1 (talk) 10:39, 9 January 2024 (UTC)Reply[reply]
  •  Strong Support as proposer, obviously.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 10:53, 9 January 2024 (UTC)Reply[reply]
  •  Strong oppose for a general ban on everything that an AI is involved in, as the title of this section might suggest. I doubt that "AI" denoising or sharpening can cause a copyright problem. AI colorization or AI upscaling yields mostly very poor results, but I cannot see the copyright problems there either. I don't mind if images created by generative AI that are just based on a text prompt are excluded, possibly with very few exceptions that are needed to illustrate pages about AI. However, has an actual copyright problem been identified with current AI-based uploads to Commons that is so serious or general that it requires a blanket ban on generative AI? I know, much of this might be out of scope anyway. --Robert Flogaus-Faust (talk) 11:32, 9 January 2024 (UTC)Reply[reply]
@Robert Flogaus-Faust: There have been several DRs lately involving clear derivatives, including Commons:Deletion requests/Files found with insource:" happy-looking Gandalf". One of the problems here is that people who are pro AI artwork will turn every DR having to do with it into an argument that AI models can't generate COPYVIO to begin with because of how many images they are trained on. It's also sort of impossible to know what is or isn't COPYVIO with AI art generators, because we don't have access to the original training sets. So take something like a seemingly benign painting of a 15th century knight. We have zero way of knowing if it's an exact copy of prior artwork, a derivative of one made in the 15th century, or based on a modern painting that's still copyrighted, since there's no source or any other way to confirm anything. The fact that there are clear instances of AI art generators creating derivatives even when people don't ask for them puts the whole thing in doubt though. --Adamant1 (talk) 11:50, 9 January 2024 (UTC)Reply[reply]
What you call clear derivatives are images that do not look at all like Gandalf; that word was used in the prompt alongside other changes to get the AI not to create evil-looking Asian people with samurai-style hats but to create old men with wizard hats. That word is often used in high-quality fan art centred on the concept of the kind of wizard I wanted, so I used it as a technique to make it produce images that more closely resemble contemporary ideas of what wizards are. And no, that they can't generate COPYVIO to begin with is not what I or anybody else I saw ever argued, which should be even clearer in the explanation below. They can, and such images should be deleted and have been deleted. Prototyperspective (talk) 12:50, 9 January 2024 (UTC)Reply[reply]
  •  Strong oppose That article is about what one could call 'hacking' generative AIs to reproduce parts of works they trained on. Such malicious images are difficult to create, rare, and should simply be deleted.
Moreover, training on nonfree works is allowed as much as you are allowed to view copyrighted images on artstation (or e.g. public exhibitions) and "learn" from them, such as getting inspiration and ideas or understanding specific art-styles. This is similar to human visual experience where anything you create is based on your prior experience which includes lots of copyrighted works. Various authoritative entities have clarified that AI works are not copyrighted. Like Photoshop or Lightroom, it's a new tool people can use in many ways and with very different results. It's a great boon to the public domain and not "antithetical to the projects goals" but matches them, as it is finally starting to become possible to create good-quality images of nearly everything you can imagine without very high technical artistic skills. Stable Diffusion is open source and has been trained on billions of images to understand concepts in prompts to it. Prototyperspective (talk) 11:41, 9 January 2024 (UTC)Reply[reply]
training on nonfree works is allowed Companies can train models on nonfree works all they want. That doesn't mean we should allow images that are highly likely to be based on copyrighted works, though. I'm not going to repeat myself, but see my reply to Robert Flogaus-Faust for why exactly I think it's such an issue. The gist of it, though, is that AI works are copyrighted when they are based on (or are exact copies of) copyrighted works, and we just have zero way of knowing when that's the case because we don't have access to what images the models were trained on. So it's just as likely that a painting of a historical figure would be based on newer copyrighted works than on older freely licensed ones. If anything, there's more chance, since there are fewer images of historical figures the further back you go. There's just no way of us knowing or checking regardless, though. At least with normal artwork we know who created it, what it was inspired by, and where it came from. None of that is true with AI artwork. An image has no business being on Commons if there's no source or at least a description of what it's based on. Period.--Adamant1 (talk) 11:58, 9 January 2024 (UTC)Reply[reply]
They are not based on individual images, with few exceptions that the link in the original post is about and that I addressed in my explanations. You also learn concepts such as 'what is a rhinoceros' from your visual experience. Do you think that, if you never saw a real rhinoceros and all you ever saw were copyrighted films of one, an image you created based on the knowledge gained through those films would be a copyright violation? I don't need to clarify that they aren't, since multiple entities have done so. As said, cases where it maliciously, usually deliberately, replicates some image should be deleted and are rare. Prototyperspective (talk) 12:22, 9 January 2024 (UTC)Reply[reply]
No offense, but your comparison of AI generators to humans and how they learn or create things is just a ridiculously bad faithed, dishonest way to frame the technology. It's also not a valid counter to anything I've said. We still require a source when someone uploads artwork created by a human, and neither a prompt nor the AI generator the image was created by qualifies as one. Period. --Adamant1 (talk) 12:40, 9 January 2024 (UTC)Reply[reply]
No, we don't list the visual experiences and inspirations and so on for artworks entirely made manually by humans. You seem to have bad faith against my explanations where "ridiculously bad faithed" doesn't even make sense. Just calling it "not a valid counter" isn't a good point. Prototyperspective (talk) 12:46, 9 January 2024 (UTC)Reply[reply]
@Prototyperspective: What have Midjourney and Dall-E been trained on, hmmm?   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 12:03, 9 January 2024 (UTC)Reply[reply]
Also billions of images. Since you didn't address what I wrote about it I'll just quote it to avoid walls of text creating circular repetitions: training on nonfree works is allowed as much as you are allowed to view copyrighted images on artstation (or e.g. public exhibitions [public television etc etc]) and "learn" from them, such as getting inspiration and ideas or understanding specific art-styles. This is similar to human visual experience where anything you create is based on your prior experience which includes lots of copyrighted works. Various authoritative entities have clarified that… Prototyperspective (talk) 12:15, 9 January 2024 (UTC)Reply[reply]
The difference is that a normal user will be banned if they repetitiously create and upload derivative works. Yet, apparently, if an AI generator has a history of creating COPYVIO that's perfectly fine "because technology." It's really just glorified meat puppeting though and your only response seems to be acting like it's not an issue when there's a plethora of evidence to the contrary. --Adamant1 (talk) 12:22, 9 January 2024 (UTC)Reply[reply]
These are not derivative works, and text2image generators, which similarly to humans learned concepts through visual learning, do not produce copyright violations by default. You want to ban a novel art tool "because technology" and I explained why that's unreasonable and why nothing backs your unfounded conclusions, while subject-level authoritative entities have clarified these are not copyvios. It's glorified avoidance of new technical capacities for no good reason. Prototyperspective (talk) 12:26, 9 January 2024 (UTC)Reply[reply]
Which images aren't derivatives? The ones in the article that Jeff linked to clearly are, and no one even asked for them in that case. So you can stick your fingers in your ears about it, but AI generators clearly produce copyrighted works. And no, I don't want to "ban a novel art tool because technology." I've said multiple times that we should allow for AI generators that are trained on freely licensed images. So I'd appreciate it if you didn't misconstrue my position. You're the only one taking an extreme, all-or-nothing position on this. --Adamant1 (talk) 12:30, 9 January 2024 (UTC)Reply[reply]
we should allow for AI generators that are trained on freely licensed images Such generators, in the sense of being useful, are impossible, and it will remain like that for a few decades if not much longer. Which images aren't derivatives? Images made via Stable Diffusion, Midjourney & Co, except for images like those in the links, which I addressed, not ignored, with "such malicious images are difficult to create, rare, and should simply be deleted". Prototyperspective (talk) 12:44, 9 January 2024 (UTC)Reply[reply]
I beg to differ. There's also iStock's AI generator. And you're the one saying I don't understand or have experience with the technology. Regardless, both create perfectly good quality images that I assume would be safe to upload, and I'm sure there are others. So it would be perfectly reasonable to only allow artwork from models that were trained on freely licensed images, given where the technology is at right now. --Adamant1 (talk) 12:52, 9 January 2024 (UTC)Reply[reply]
Those are not freely licensed.
Not sure why you advocate for these commercial proprietary AI models. Stock images are usually not accurate and/or creative depictions of things either and details about NVIDIA Picasso remain unknown. Prototyperspective (talk) 13:00, 9 January 2024 (UTC)Reply[reply]
I don't care if the underlying technology is freely licensed. That's not the issue. Whether people can use the images without having to worry about violating someone else's copyright is, and per Getty Images' website, images created with their software are "commercially‑safe—no intellectual property or name and likeness concerns, no training data concerns." That is what's important here, not whether the underlying software is open source or whatever. --Adamant1 (talk) 13:06, 9 January 2024 (UTC)Reply[reply]
The images trained on are not freely licensed. I do see how you don't care about open source but that isn't what I meant. --Prototyperspective (talk) 13:09, 9 January 2024 (UTC)Reply[reply]
That's not the point. You're just being obtuse. --Adamant1 (talk) 13:13, 9 January 2024 (UTC)Reply[reply]
 Oppose largely on the basis of terminology. "AI" is a marketing buzzword and not well-enough defined to make policy around. As Robert Flogaus-Faust mentions, there are plenty of things that are called "AI" that are fine for Commons, at least from a copyright perspective. --bjh21 (talk) 12:01, 9 January 2024 (UTC)Reply[reply]
@Bjh21 I mean generative AIs.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 12:07, 9 January 2024 (UTC)Reply[reply]
@Jeff G.: I think even that is probably too broad. For instance it would cover GPT-4 used for machine translation. --bjh21 (talk) 12:40, 9 January 2024 (UTC)Reply[reply]
@Bjh21: Translation starts with a source work of the same type as the output. By contrast, generative AIs (typically, those today creating medium-resolution images) don't start with a source image; or they start with many source images, some of which are non-free. They also are not notable artists.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 16:42, 10 January 2024 (UTC)Reply[reply]
@Jeff G.: I don't really understand this field, but en:Generative artificial intelligence defines generative AI as "artificial intelligence capable of generating text, images, or other media, using generative models," and mentions GPT-4 as an example (it even has the word in its name). en:Machine translation notes that "one can also directly prompt generative large language models like GPT to translate a text." This leads me to some concern that banning all output of generative AIs might exclude large classes of use that aren't problematic. But maybe machine translation by generative AI is problematic; I don't know. --bjh21 (talk) 17:25, 11 January 2024 (UTC)Reply[reply]
 Comment AI generated files need to be uploaded as PD, as there is no sweat of the brow involved and all such services are trained on materials they have found on the internet. Either that, or all AI generated files are not allowed because the underlying source material isn't declared; we can only accept freely sourced materials, where those sources are provided. As for some minor editing tools to adjust colours, sharpen, or remove noise, those types of adjustments have always been acceptable. Gnangarra 12:12, 9 January 2024 (UTC)Reply[reply]
The underlying source material are billions of images for txt2img; you want to have a sorted list of thousand–billions of images listed beneath each file? e.g. Stable Diffusion’s initial training was on low-resolution 256×256 images from LAION-2B-EN, a set of 2.3 billion English-captioned images from LAION-5B‘s full collection of 5.85 billion image-text pairs, as well as LAION-High-Resolution, another subset of LAION-5B with 170 million images greater than 1024×1024 resolution (downsampled to 512×512). Prototyperspective (talk) 12:29, 9 January 2024 (UTC)Reply[reply]
they are only numbers; they generate based only on a smaller subset, as a picture of a cow has little influence on a picture of a flower. Clearly our images must be honest products of photographers, otherwise they serve no encyclopaedic/educational purpose about the subject. Diagrams have always covered the gap photographs can't convey. Gnangarra 12:48, 9 January 2024 (UTC)Reply[reply]
Just because you can't think of other potential use-cases doesn't mean there aren't some; for example, illustrating art styles. There are thousands and thousands of photos for whatever photographable thing you can think of, yet other subjects of human culture don't seem to be worthy of benefiting from novel technology at all. I put thousand–billions there instead of billions because the images have different degrees of relevance to the generated image. If you generated merely an image of a cow, which wouldn't be useful, then obviously the countless labelled photographs of cows would be most relevant to that image. Prototyperspective (talk) 12:54, 9 January 2024 (UTC)Reply[reply]
This isn't about potential uses I can think of; this is about the movement's honesty and reliability. The end user must be able to trust that what is available on every project is from a reliable source. There are many endangered species, past wars, and deceased persons that we don't have photographs of. When there is no photograph, we should not dishonestly present such photographs as existing. Gnangarra 13:03, 9 January 2024 (UTC)Reply[reply]
Agree. That's not a case for banning AI images. Btw here is an AI image depicting the extinct dodo. Prototyperspective (talk) 13:06, 9 January 2024 (UTC)Reply[reply]
that image is false anyway, as it doesn't show the bird's colourings, nor depict it in its natural environment with plant species from its habitat. My point is that when we already have reliable illustrations, including colour details, we don't need these images anyway; if we do, then these images mislead the viewer and make a mockery of everything we strive to do in being a reliable, trustworthy source. Gnangarra 13:15, 9 January 2024 (UTC)Reply[reply]
Inaccuracies should be pointed out and also occur for manually made images. Moreover, the images can be improved via new versions and the AI software can also improve over time. There are many files in Category:Inaccurate paleoart. Lastly, for many cases we don't have such images available, images being on WMC doesn't mean they need to be used, WMC is a free useful media repository while Wikipedia is the encyclopedia, and all of what you said isn't a case for banning but for properly describing and/or deleting various files. Prototyperspective (talk) 13:33, 9 January 2024 (UTC)Reply[reply]
If someone wants AI generated media, then they will go to the AI service of their choosing and create it as and when they need it; logically, that lets them grab the most up-to-date reckoning. Gnangarra 13:49, 9 January 2024 (UTC)Reply[reply]
Doesn't make sense. I don't think you have much experience with these tools beyond generating very simple images, and you dismiss them overly broadly. You wouldn't say "ah, people can just make a new diagram about xyz when they need it, so we don't need to host it, and the same goes for artworks of e.g. cubism". There clearly is an anti-AI-tools bias, with lots of unfounded dismissals. Prototyperspective (talk) 14:17, 9 January 2024 (UTC)Reply[reply]
I have yet to see or hear a legitimate use case for most, if not all, AI images, despite all your pontificating about it, other than the Wikibook specifically having to do with AI. That's not to say there isn't one, but arguments like "AI artwork is educational because AI artwork is educational" are just tautological. All you're doing is talking in circles while occasionally claiming the people who disagree with you are biased. Same goes for the repeated insistence on making this about other mediums of artwork. Apparently you're incapable of talking about AI artwork without deflecting or trying to change the subject for some reason, even though it's supposedly in scope and there's no reason to ban it. It's not that people here who think it should be moderated aren't open to alternatives, but you're clearly not making a case for them. Let alone have you even proposed any. All you've done is get in the way of there being any changes to how we handle AI artwork whatsoever. Otherwise, propose something instead of just getting in the way of everyone else who's trying to deal with the issue. --Adamant1 (talk) 14:45, 9 January 2024 (UTC)Reply[reply]
I explained specific use-cases, and the wikibook is about explaining use-cases (see "applications" in the title). This is probably my last reply to you here, but I'm not trying to change the subject for some reason, as you accuse me of. As should be clear to people reading the discussion, I'm always addressing specific points in a prior comment. Interesting that you dismiss all my points in comments like this, where you allege I'm doing nothing but calling people biased or reasoning in circles. Prototyperspective (talk) 14:53, 9 January 2024 (UTC)Reply[reply]
You really haven't. I'm pretty sure I've said it already, but they all boil down to vague handwaving about use cases that either don't exist to begin with or that no one is or will be using the images for. Like your claim that an image was in scope because you could use it on your personal blog, which you don't even have to begin with and aren't using the image for regardless. Same goes for the Jeff Koons knock-off image. You claimed it could be used in a Wikipedia article, but no one is using it for that, and it would probably be removed if anyone added it to an article anyway. The "uses" have to at least be realistic and ones that people will actually use the images for. You can't just invent a random, unrealistic reason to keep an image and then act like everyone else is just being biased or whatever when they tell you it's not legitimate. --Adamant1 (talk) 15:01, 9 January 2024 (UTC)Reply[reply]
@Gnangarra only A.I. art in countries that follow U.S. jurisprudence may be allowed to be hosted here. But not UK A.I. art: see this. JWilz12345 (Talk|Contrib's.) 12:32, 9 January 2024 (UTC)Reply[reply]
We decide Commons policy; the options are none, only if all sources are acknowledged, and only PD licenses. None of these options overrides any US laws. The same way we apply the precautionary principle, a person who generates a work and publishes it on Commons, which is in the US, is subject solely to US laws. Gnangarra 12:45, 9 January 2024 (UTC)
@Gnangarra that may be true, until a British A.I. artist files a letter of complaint to Wikimedia. Files should also be free in the source country, not just the U.S.
English Wikipedia can host unfree British A.I. art, though, as enwiki only follows U.S. laws. JWilz12345 (Talk|Contrib's.) 14:46, 9 January 2024 (UTC)Reply[reply]
For one, the "source country" (in the sense of the Berne Convention) of any work first published on the internet and accessible from any country in the world may be considered to be any country. Various US courts have found that simultaneous publication occurs when a work is published online, and thus that works first published online are US works for the purpose of copyright law.
But more generally, any instance in which Commons goes above and beyond US law is up to the community. You could argue that Commons should treat this like PD-Art. D. Benjamin Miller (talk) 06:13, 3 February 2024 (UTC)Reply[reply]
  •  Oppose per the precedent that we allow a human artist to view, say, 5-10 copyrighted images of a person, and then draw a portrait of that person based on the likeness they have gleaned from those copyrighted images. A generative AI has seen far more images than that, and any copyrightable portion is likely to be heavily diluted, more so than the case of the human artist. Of course, individual generations can be nominated for deletion if a close match to a specific copyrighted image can be identified or if it is clearly a derivative work of a copyrighted subject. As for the objection "what if there's some image it's copying that we don't know about", the same objection applies for human artists: "what if the artist is not honest about their sources?" -- King of ♥ 17:39, 9 January 2024 (UTC)Reply[reply]
the same objection applies for human artists It could just be copium, but I feel like there's a difference of scale there that makes derivatives created by humans easier to suss out than it is for AI-generated images, since at the end of the day people are working with extremely small data sets that usually relate to their specific area of interest. For instance, if we are talking about someone who mainly speaks Mandarin Chinese and has a history of uploading images from China, it's a pretty good bet the image in question won't be a derivative of a 1940s American cartoon character. Or we can at least ask another user who speaks the language and/or is from China if they have seen the character before. We can't do that with AI artwork though, because the dataset is essentially every single image created in the last 500 years. So sure, the same problem exists regardless, but it's the difference between looking through your junk drawer to find a key versus trying to find a grain of sand in the ocean. --Adamant1 (talk) 18:27, 9 January 2024 (UTC)Reply[reply]
Your argument essentially argues against itself. As you say, AI learning works from pretty much the sum total of human visual arts, and doesn’t even use any particular one of those at a time. It’s highly unlikely you’ll just randomly get a copyrighted character if you don’t ask for one. Dronebogus (talk) 01:49, 12 January 2024 (UTC)Reply[reply]
It’s highly unlikely you’ll just randomly get a copyrighted character if you don’t ask for one. @Dronebogus: I've used Dall-E to create portraits of women, and every so often it will generate one of Scarlett Johansson, even though I don't explicitly ask for images of her. So I think it either has an algorithm that favors creating images based on popular characters or people, or it just happens to have been trained on images of female celebrities from the past 20 years more than anything else, so likenesses of Scarlett Johansson get rendered more often because of how the weighting in the training model works. Either way, if I can generate a couple thousand portraits where a non-trivial number of them look like living movie stars, then I don't see why the same wouldn't occur for modern movie or cartoon characters. I think it naturally follows that this would be the case anyway, because there are inherently more images of the Simpsons out there that it was trained on than, say, a cartoon like Mutt and Jeff. Same goes for it rendering images of women that look like Scarlett Johansson versus Carole Lombard, or for that matter just a "random" woman. --Adamant1 (talk) 10:24, 12 January 2024 (UTC)Reply[reply]
If it’s super obvious then you filter it out as a copyvio. This isn’t difficult. Dronebogus (talk) 12:25, 12 January 2024 (UTC)Reply[reply]
  •  Oppose much too broad. This would mean we couldn't even have examples of AI-generated artwork. I suggest reading the section beginning "That said, there are good reasons to host certain classes of AI images on Commons" at Commons talk:AI-generated media. - Jmabel ! talk 19:38, 9 January 2024 (UTC)Reply[reply]
  •  Oppose (a) not all current and future models are trained with nonfree works; (b) not all models trained with nonfree works produce work that's legally considered derivative; (c) commons should follow, not lead when it comes to making decisions based on the law. Sometimes we understand the law and enact a policy that's more conservative, but in this case we'd be enacting a policy that's miles beyond any legal lines set thus far AFAIK. — Rhododendrites talk22:10, 9 January 2024 (UTC)Reply[reply]
  •  Oppose, the exclusion of AI-generated works should be done on a case-by-case basis, not as a blanket exclusion. Good illustrative educational works that are obviously in the public domain shouldn't be grouped together with AI-generated images of Sailor Moon, Optimus Prime, and Magneto. We should judge AI-generated works on a case-by-case basis. This is still largely unregulated and current United States legislation sees most AI-generated works as public domain, let's not be stricter than the law. Yes, we should be as cautious as possible, but that caution should not be applied this broad. --Donald Trung 『徵國單』 (No Fake News 💬) (WikiProject Numismatics 💴) (Articles 📚) 22:32, 9 January 2024 (UTC)Reply[reply]
  • I don't understand the proposition here. Training AIs on the content here is one issue, and I can see an argument based on 'ban non-licence observant AI training from our licensed content', difficult to implement as that might be.
However the solution here 'ban AI uploads' seems unrelated to that.
I would not (as yet) ban AI uploads. Maybe I could be convinced otherwise. But I do think that we should immediately (or ASAP) require all AI to be clearly tagged as such, and maybe its source identified. Whatever we decide in the future is going to be made much easier by doing that early on. Andy Dingley (talk) 17:15, 10 January 2024 (UTC)Reply[reply]
  •  Strong oppose I don’t even know where to begin with this. I think the fact that it’s based on a link to a single random article— not a strong legal basis, not extensive reliable sources, not even an argument from the proposer —is a good starting point. That and the fact that it’s based on an assumption that AI will always recognizably plagiarize a certain copyrighted work or works, rather than just pull from 90% of the Internet and overlap a billion similar works into a nonspecific whole. We’re putting the cart way, way before the horse here. Dronebogus (talk) 01:45, 12 January 2024 (UTC)Reply[reply]
  •  Support yep. Where should future AI get reliable stuff to learn from, if Commons is full of AI work itself ???? Alexpl (talk) 10:47, 12 January 2024 (UTC)Reply[reply]
    This is a reason why I've been making sure that all images made using AI tools are in some subcategory of Category:AI-generated images. You can then easily exclude and maintain them.
    It's not "full of it" if there are a few images among 100 million files, even of the most mundane things photographed thousands of times. Outright banning is a knee-jerk, simplistic reaction without much thought given to it, like banning images made or modified using Lightroom or Photoshop in 2003. I didn't know people here were so anti-(novel-)technology and pro-indiscriminate exclusion of tools and images. Prototyperspective (talk) 11:48, 12 January 2024 (UTC)Reply[reply]
    Frame it a few years in the future where AI image generators are commonplace. Realistically, how many AI-generated images being put in normal categories at that point would it take for it to become unmanageable and for the project to lose all credibility as a source of accurate educational material? It just doesn't scale past a couple of enthusiasts who are willing to manage the images as a personal pet project. The same can't be said for photographs that people made minor edits to in Photoshop or whatever. The fact is that they just don't pose the same problems, and the project's reputation will never be damaged (or its usefulness destroyed) by people touching up old photographs in Lightroom, like it could be (and probably will be) by allowing an infinite number of fake AI-generated images of historical figures or whatever. --Adamant1 (talk) 12:48, 12 January 2024 (UTC)Reply[reply]
    They're already commonplace. That's just hypothetical speculation, and it still doesn't mean there aren't other, better ways to deal with that. Wikimedia Commons is a repo for freely usable media files, and there are lots of illustrations and artworks in it.
    For example, simply don't add them to these categories, or only to AI-specific subcategories. I don't see how these images could be considered "accurate educational material", especially in the categories they show up under, but they and many other images don't get outright banned or deleted (that they don't may be a good thing, and there is a certain policy that often gets cited which, I get the impression, people assume only refers to subjects like nudity, where some removals from a site are far less detrimental to society and free education than removals of general-purpose tools and more socially relevant subjects).
    The credibility is damaged by outright banning a useful general-purpose tool, as well as by creating an unwelcoming environment for AI developers and potential media uploaders, and by undermining the project's reputation of being at the forefront of free speech and the creative commons – not indiscriminately censoring/excluding/however-you-call-it free media, and being at the forefront of the public domain rather than working against it and marginalizing new forms of art, creative methodology, and technology. There is also the potential for an infinite number of photographs of grass, trees, or tables, but we still don't ban those; in fact, I think there are few if any legal, potentially useful media that WMC outright bans. Prototyperspective (talk) 13:53, 12 January 2024 (UTC)Reply[reply]
That is not the job of Commons. We have nothing to win here, and you'll, unfortunately, be proven wrong in a short time. No need to further elaborate on the "state of the art" etc. here. Alexpl (talk) 16:36, 16 January 2024 (UTC)Reply[reply]
  • We really need a “geekography test”— if pictures of naked women objectified as computer software is somehow in a million years “educational”, what isn’t? Dronebogus (talk) 14:06, 12 January 2024 (UTC)Reply[reply]
    I don't disagree with either one of you about the nude photos, but you're comparing apples and oranges, because I said "accurate educational material", not "educational material." I'm sure you both get the difference. The problem with AI artwork is that it's inherently inaccurate due to the nature of the thing. So while it's "educational" in the sense of educating people about where the technology is at, it's not educational in regards to the subjects that the images purport to be about. That doesn't go for nude women though, obviously. No one is going to mistake an image of a nude woman with a mushroom from Mario on it for a 15th-century historical figure, let alone put it in a category for one. Although I agree the former should also be dealt with, and it could be at any point. But now it's way less likely the issues presented by AI artwork will be resolved, because you've poisoned the well by going off about nude photos. --Adamant1 (talk) 15:17, 14 January 2024 (UTC)Reply[reply]
  •  Oppose It has been common knowledge that AI generators are trained on copyrighted works for years. Pretending it's some kind of "Gotcha" moment is quite frankly ridiculous--Trade (talk) 15:40, 15 January 2024 (UTC)Reply[reply]

Ban images generated with MidJourney[edit]

Counter-proposal, since the original doesn't seem to be going anywhere, but at least IMO there are still unique issues with images created by MidJourney that deserve scrutiny outside of the wider question of whether to allow AI artwork in general.

Anyway, per Jeff G, MidJourney has been shown to generate derivatives regardless of the prompt or whether users asked for them. The creators of the software have also gone out of their way to intentionally train the model on copyrighted material, regardless of whether it leads to images that violate copyright. This leads to two issues:

1. There's a non-trivial chance that whatever images are generated by MidJourney will be copyright violations, and there's no easy way to know which are or aren't due to the nature of the thing. Nor is it something that can easily be policed at any kind of scale, especially without any kind of guideline in place making it so the images can be speedy deleted or otherwise fast-tracked to deletion. This issue will also only get worse and harder to deal with if MidJourney is ever found liable in court for violating copyright. It's much harder to deal with potential copyright violations after the fact.

2. The way MidJourney is maintained, with its utter lack of respect for other people's intellectual property, clearly goes against the goals of the project and the wider movement.

Although admittedly both can be said for other AI generators, they clearly aren't as brazen or problematic in other cases as they are with MidJourney, so I think it warrants a separate solution. Also, in case anyone is going to claim we don't ban software: yes we do, MP3s and MP4s being the ones that come to mind, but I'm sure there are others. And sure, it's for different reasons, but this still wouldn't be unique regardless.

Also, an exception to the proposal will be made in cases where the image or images are being used to illustrate MidJourney itself, although with the caveat that it shouldn't be used in a bad-faith way to game the system.

--Adamant1 (talk) 16:23, 15 January 2024 (UTC)Reply[reply]

Still  Oppose, because a) it hasn’t been found guilty of copyright violation, b) we still need to illustrate MidJourney itself, c) you still need to prove the number of potential copyright violations goes beyond “non-trivial” into a plurality or majority. A “non-trivial” number of human uploads turn out to be copyvios, but we don’t ban humans uploading because most of them aren’t. Dronebogus (talk) 18:47, 15 January 2024 (UTC)Reply[reply]
@Dronebogus: I doubt it would make a difference, but I'm more than willing to modify the proposal to have an exception for images that illustrate MidJourney itself if you want me to. Really, I assumed it would be a given. Apparently not, though. --Adamant1 (talk) 19:09, 15 January 2024 (UTC)Reply[reply]
“Ban x” usually doesn’t imply exceptions Dronebogus (talk) 19:10, 15 January 2024 (UTC)Reply[reply]
I would say it does if the ban is for "reason X" and that reason wouldn't apply to the exception. We'll have to agree to disagree though. Regardless, I added it to the proposal so it's explicit. --Adamant1 (talk) 19:16, 15 January 2024 (UTC)Reply[reply]
 Oppose For the same reasons as before. Will this ever stop, and aren't indiscriminate DRs against useful AI images, which are often the only ones available for multiple notable subjects, enough?
It's a bad idea and a precedent not in line with the prior advocacy for free speech to ban image-creation tools; this applies to Photoshop as much as to Midjourney. There is a non-trivial chance photographs or paintings are derivative works, movie stills, or similar – do we ban them all now too? It wouldn't be harder to deal with if it wasn't banned, and despite your speculations, Midjourney won't be liable for generally violating copyright in regards to its images, which would go against all that has been said and decided previously. Machines are allowed to learn from publicly visible media as much as humans are; these tools are a great boon to the public domain and are general-purpose tools that are and will be used for pretty much everything, which is what WMC would be banning while considering itself some kind of pro-public-domain platform.
MP3s are not software but media formats. If more is done, it shouldn't be a ban. The problems you think are exclusive to AI tools, and which so far have not really manifested on WMC, are much broader and concern all kinds of images, where things like TinEye bots or reports on checked categories that are most likely to receive derivative works would be useful. Prototyperspective (talk) 22:13, 15 January 2024 (UTC)Reply[reply]
despite your speculations, Midjourney won't be liable for generally violating copyright in regards to its images Not that I think you care, since you can't seem to go one discussion related to this without claiming I'm making things up or don't know what I'm talking about, but I didn't just come up with that out of thin air. Legal experts seem to agree that MidJourney will probably be held liable for violating copyright in at least one of the many legal cases it is currently facing and/or will likely face in the future. Of course we will have to see if it is, but we have something called the "precautionary principle" for a reason. All we need is reasonable doubt about the copyright status of something, and I think that's been more than met when it comes to artwork created by MidJourney. We also defer to what legal experts have to say about a particular topic when deciding guidelines. Whatever helps you cope, though. At least I'm proposing something that isn't just banning AI artwork outright, which was supposedly your whole problem with this to begin with. --Adamant1 (talk) 22:55, 15 January 2024 (UTC)Reply[reply]
One, you need to stop being so condescending towards Prototyperspective. Two, even if MidJourney are found liable for copyright infringement, there’s no need to ban their output right now. Or even at all. They’ll probably work to remedy this rather than throw their hands up and say “guess we’re done here, sorry folks”. Then only images up to that point would need to be deleted. Dronebogus (talk) 17:51, 16 January 2024 (UTC)Reply[reply]
First of all, Prototyperspective has a long, well-established history of misrepresenting my position and treating me like I'm making things up or don't know about the subject. So if anyone is being condescending, they are. Secondly, it doesn't look like MidJourney wants or has the ability to remedy things on their end, since they intentionally trained the model on a large amount of copyrighted artwork and MidJourney creates derivatives regardless of the prompt. There probably isn't really a way to "remedy" that outside of re-training it or completely starting over, neither of which I think they are going to do. They can and have disabled certain keywords that lead to it generating copyrighted images, but it's not like we can realistically just delete images on the fly up to that point every time they patch or tweak something. --Adamant1 (talk) 18:18, 16 January 2024 (UTC)Reply[reply]
I wasn’t even suggesting that; if there was a major attempt to remove copyrighted material, then we can delete everything up to that point, not “every time they update we delete everything”. But I understand you will absolutely never budge on this or anything else related to AI; if you’ve no intention of reconsidering, ever, then please stop responding in order to argue for the sake of arguing. Dronebogus (talk) 01:54, 18 January 2024 (UTC)Reply[reply]
@Dronebogus: I think I budged when I proposed this as an alternative to a complete ban. I've also said a couple of times now that I support AI-generated artwork that is used on other projects and/or created with models that were trained on freely licensed images, of which there are currently several. It seems like both you and Prototyperspective have a real problem with listening, though, since both of you seem hell-bent on treating me like I'm some kind of hard-line hater of AI or something when I'm not. You're the ones who aren't willing to budge. Otherwise you would have supported this, or at least proposed something else instead of just making it about me. I have zero problem with setting some kind of reasonable standard for what type of artwork to include and what not to. You won't do that, though. --Adamant1 (talk) 02:08, 18 January 2024 (UTC)Reply[reply]

Referring to "realistic" in AI categories[edit]

See Category:Realistic animals by DALL-E. Referring to these files as being realistic is a falsehood which damages Commons' and the wider movement's reputation as a reliable, accurate, and trustworthy source. The files should not be identified as such. I suggest that they should clearly distinguish themselves as machine-generated (AI, if you wish), as opposed to the work of photographers and illustrators. I propose that all DALL-E categories are styled as "Machine Generated(AI) illustrations by DALL-E"; in the specific example it would become Category:Machine Generated(AI) animal illustrations by DALL-E, and the viewer can decide whether they consider it to merit the peacock term "realistic". Gnangarra 13:27, 9 January 2024 (UTC)

This category exists exactly so that you can move images that have inaccuracies / are unrealistic out of the category. You misunderstood the point of it. Moreover, you're confusing WMC with Wikipedia and ignoring file descriptions and file titles. Also see Category:Inaccurate paleoart. The proposal regarding category renaming could be reasonable, but I'd suggest that it's discussed via categories-for-discussion procedures and in a way where the title matches the contents. I've always argued (mainly in this context) that titles, categories, and file descriptions should match the actual file contents, so renaming such categories may be something I'd support. Prototyperspective (talk) 13:36, 9 January 2024 (UTC)Reply[reply]
This is not just about renaming a single or associated group of categories; specifically, this is about setting a policy for such styles, including the removal of peacock/suggestive terminology that can mislead those searching media files. DALL-E is just the example. Gnangarra 13:44, 9 January 2024 (UTC)
I'm also concerned about misleading search results, but less so when it comes to clearly labelled AI art than, let's say, unexpected animations of people dying and porn images. I can't really understand why people are so worried about clearly labelled AI images showing up in search results, relatively speaking. Still, I support making things clearer. In particular, one thing I suggested was having a tag note in the corner of an image or appended to file titles that e.g. says "[Made using AI]"; something similar could be done for categories, but that seems to already be the case in your examples (I addressed further things above). Prototyperspective (talk) 14:21, 9 January 2024 (UTC)Reply[reply]

Strongly oppose "realistic" in the names of categories that users can freely add. It involves a judgement call. If we were to allow this, it should involve at least the level of rigor we bring to judging Quality Images. - Jmabel ! talk 19:41, 9 January 2024 (UTC)Reply[reply]

There is Category:Inaccurate paleoart; for images made involving AI tools, I thought it would be best if inaccuracy is assumed by default.
I thought the cat would be useful and don't really care about it, even though I don't understand why people have such strong feelings and concerns about AI images in particular but not about other comparable issues. Just nominate the cat for deletion. I thought having a way to separate, let's say, File:Parrot in Peaky Blinders style.png and File:Capybara espacial.jpg from images aiming at or achieving realistic depictions, like File:Polygon illustration of a dog.png, File:Monkey in watercolour.png and File:Ai Generated Images Tiger.png, would be useful (and possibly needed, so that one can assume inaccuracy if the cat is missing and easily find images that are more realistic or unrealistic). Again, I'd suggest just making a CatForDeletion/Discussion post, and I don't care what happens to the cat if people don't see usefulness in this distinction. Prototyperspective (talk) 21:59, 9 January 2024 (UTC)Reply[reply]

Next and previous in series links[edit]

Let's say we are looking at number 06 in an automatically numbered series. Well, there should be links to 05 and 07 on it, so we don't need to go back to an index page to see the next one.

No, I'm not saying the uploader person should remember to make the links.

I'm saying the upload creation process, where the 01 02 03 are assigned, should make the links.

And in fact they need to be made for all already existing series too...

And perhaps have all the links, 01 02 03... on all the pages, so one can jump around, not just to the next and previous.

Yes, I know one can manually edit the URL in one's browser's omnibar. But that is so old-fashioned.

Jidanni (talk) 13:24, 11 January 2024 (UTC)Reply[reply]

Perhaps something like this should be available as an option, but it should absolutely not be assumed automatically from file naming. I routinely use a number on the end to distinguish photos I took of the same subject, but it is very rare when they are intended as this sort of sequence. - Jmabel ! talk 16:32, 11 January 2024 (UTC)Reply[reply]
In principle, a good idea, but should not be automatic, because (like Jmabel said above) often numbers are merely used to differentiate photos, not to imply a sequence. I support this idea for a new optional tool... --P 1 9 9   16:44, 11 January 2024 (UTC)Reply[reply]
When it's autonumbered, for example when the uploader gives just one name for the batch, then yes, the sequence links can be inserted by default as part of the upload wizard's autonumbering process. When the uploader provides separate numbers, then there's no autonumbering and thus should be no automatic sequence links. Jim.henderson (talk) 06:36, 14 January 2024 (UTC)Reply[reply]
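To make the idea concrete, here is a minimal sketch of what automatic previous/next links for an autonumbered batch could look like. Everything here is an assumption for illustration: the naming scheme ("Basename NN.jpg"), the two-digit zero-padding, the separator, and the link labels; no such feature currently exists in the Upload Wizard.

```python
# Hypothetical sketch (assumed naming scheme, not an existing Upload Wizard
# feature): given an autonumbered batch "Basename 01.jpg" .. "Basename NN.jpg",
# build the previous/next navigation wikitext that the upload process could
# append to each file page as it assigns the numbers.
def series_links(basename: str, index: int, total: int, width: int = 2) -> str:
    links = []
    if index > 1:
        # Link back to the previous file in the series, zero-padded.
        links.append(f"[[File:{basename} {index - 1:0{width}d}.jpg|previous]]")
    if index < total:
        # Link forward to the next file in the series.
        links.append(f"[[File:{basename} {index + 1:0{width}d}.jpg|next]]")
    return " | ".join(links)

print(series_links("Example series", 6, 10))
# [[File:Example series 05.jpg|previous]] | [[File:Example series 07.jpg|next]]
```

The first and last files in the batch naturally get only one link, so no special index page is needed to walk the whole sequence in either direction.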

Retiring License template tag[edit]

In 2011 I created the {{License template tag}} template, an empty template which is added to 5 license layout templates and transcluded in almost all Commons files. This tag template was essential for creating SQL queries for files missing a link to this tag, which usually means that they are missing any license. Some years later, Extension:CommonsMetadata was created, which adds Category:Files with no machine-readable license to files without a license. I am no longer using the {{License template tag}} template and I do not think it is needed anymore. At the same time, there is an issue with the Commons database growing way too fast (see phabricator:T343131), and this template contributes to this issue. I would like to propose that we stop using this template; however, I am not sure whether others use it for something. Jarekt (talk) 17:56, 21 January 2024 (UTC)Reply[reply]
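For readers unfamiliar with the technique, here is a hypothetical sketch of the kind of query described above (this is not Jarekt's actual query). The table and column names follow the older MediaWiki schema for the `page` and `templatelinks` tables (newer versions route template links through a `linktarget` table), and a tiny in-memory SQLite replica stands in for the Commons database so the example is self-contained.

```python
# Hypothetical sketch of the kind of SQL check {{License template tag}} enabled:
# find file pages that do not transclude the tag template, which usually means
# they carry no license template at all. Schema is a simplified emulation of
# MediaWiki's page/templatelinks tables (pre-1.39 column names assumed).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE page (page_id INTEGER, page_namespace INTEGER, page_title TEXT);
CREATE TABLE templatelinks (tl_from INTEGER, tl_namespace INTEGER, tl_title TEXT);
-- Two sample files: one transcludes the tag (via its license template), one does not.
INSERT INTO page VALUES (1, 6, 'Licensed.jpg'), (2, 6, 'Unlicensed.jpg');
INSERT INTO templatelinks VALUES (1, 10, 'License_template_tag');
""")

missing = conn.execute("""
SELECT page_title FROM page
WHERE page_namespace = 6            -- NS_FILE
  AND NOT EXISTS (
    SELECT 1 FROM templatelinks
    WHERE tl_from = page_id
      AND tl_namespace = 10         -- NS_TEMPLATE
      AND tl_title = 'License_template_tag'
  )
""").fetchall()

print(missing)  # file pages with no link to the tag, i.e. likely unlicensed
```

On the real database such a query would of course scan the actual `templatelinks` rows produced by the transclusion of the tag inside the license layout templates.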

 Oppose. User:AntiCompositeBot's NoLicense task uses {{License template tag}} to check for license templates, because the CommonsMetadata category was not reliable enough to detect all license templates. It's also not possible to replace it with a search query because of the number and complexity of primary and secondary license templates. AntiCompositeNumber (talk) 19:24, 21 January 2024 (UTC)Reply[reply]
https://commons.wikimedia.org/w/index.php?search=hastemplate%3A%22License_template_tag%22%20incategory%3AFiles_with_no_machine%2Dreadable_license&title=Special%3ASearch&ns0=1&ns6=1&ns12=1&ns14=1&ns100=1&ns106=1 says there's at least 800 files with the template in the category. AntiCompositeNumber (talk) 19:39, 21 January 2024 (UTC)Reply[reply]
AntiCompositeNumber, I am glad I asked. If this template is used, then we should keep it. --Jarekt (talk) 20:04, 21 January 2024 (UTC)

Per AntiCompositeNumber reply I would like to withdraw my proposal. --Jarekt (talk) 20:06, 21 January 2024 (UTC)Reply[reply]

Unresolve. Most of those results are errors that should be fixed, and I have reduced the number of results from 800 to 120. --GZWDer (talk) 23:04, 21 January 2024 (UTC)
Other than one file I tagged as having no permission, only one file is left in the search results: File:GFDL (English).ogg. --GZWDer (talk) 14:36, 4 February 2024 (UTC)

New protection group for autopatrollers[edit]

Commons has long needed a protection group similar to the English Wikipedia's Extended Confirmed Protection. However, the difference between Commons and a regular wiki is that on a regular wiki one can assume a user is competent after 500 edits and 30 days, but on Commons the copyright and licensing system is so complex that a manual review would be needed, which is what autopatrolled is. This is why I'm not proposing a simple 30/500 or similar protection.

That being said, this absence of a "middle" protection has led to the increasing use of template protection and full protection as a "solution" for files with edit wars and LTAs attacking. Just look at the lists at [2] and [3]. For example, this file had to be template-protected due to an LTA and the absence of a "middle" protection.

However, template protection is simply too much for most scenarios. Not only is it only meant to be used for templates, but there are only 49 template editors plus the 187 admins, which is simply inadequate. And I doubt I need to mention the issues with fully protecting pages indefinitely. By contrast, there are 7323 autopatrollers, 640 patrollers, and 325 license reviewers as of writing, which is many more active users.

Hence I propose a protection group for autopatrollers. Thank you, —Matrix(!) {user - talk? - useless contributions} 18:09, 23 January 2024 (UTC)Reply[reply]

Votes and discussion[edit]

Creating a new shackle[edit]

Well, there seems to be clear consensus for this protection group. I'll link to some possible shackles below to use as an icon, but feel free to add your own below: —Matrix(!) {user - talk? - useless contributions} 15:27, 3 February 2024 (UTC)Reply[reply]


Votes and discussion[edit]

Ideas wanted to tackle Freedom of Panorama issue[edit]

Hello all! We are looking for ideas to tackle the problem of media deleted because of Freedom of Panorama-related issues, and we're looking especially for admins and people who are knowledgeable in this issue to intervene. If you are interested, please join the discussion. Thanks in advance! Sannita (WMF) (talk) 17:03, 29 January 2024 (UTC)Reply[reply]

Require community consensus for new non-copyright restriction templates[edit]

There are many templates for non-copyright restrictions (see Category:Non-copyright restriction templates); many of them, like {{Personality rights}} or {{Trademarked}}, are useful as they are valid in all jurisdictions. But in recent years many templates were created to warn about the usage of a file in some autocratic countries, like {{Chinese sensitive content}}, {{Zionist symbol}} or {{LGBT symbol}}. These templates were created by single users without prior discussion and are added randomly to files.

This should be restricted. If we create a template for every restriction in some, or even only one, autocratic country, we would end up with a long list of warning templates on every file page. The Commons:General disclaimer linked on every page is totally sufficient.

Therefore I propose that new non-copyright restriction templates need to be approved by the community by proposing them on this board. This does not apply to minor variations of templates like {{Personality rights}}. Whether to keep or delete the templates created before this proposal should be decided in regular deletion requests.

As a rough guideline for the approval of new templates, I would propose that templates for countries with an en:World Press Freedom Index score lower than 70 should generally not be created. Exceptions are possible in both directions: a template could still be created for a region with less press freedom, or not created for a region with a good press freedom situation. If created, a template needs a proper definition of when and how to use it. GPSLeo (talk) 09:22, 3 February 2024 (UTC)Reply[reply]

70 on the World Press Freedom Index may be a bit too high. I see, for example, that Romania is just under that, but I'd think that their restriction on images of embassies is unusual enough that we might want a template for that. - Jmabel ! talk 01:55, 4 February 2024 (UTC)Reply[reply]
70 is ridiculously too high— that’s like most of the world outside of Western Europe, Oceania and upper North America. Under 40 would be more reasonable Dronebogus (talk) 02:38, 4 February 2024 (UTC)Reply[reply]
We could also drop this rough guideline and simply say that the templates have to be approved, without any further guidance on when such templates should be created. Also, even for countries with a good press freedom situation, we should not create a template for every restriction. GPSLeo (talk) 07:30, 4 February 2024 (UTC)Reply[reply]
Is WMC even available in mainland china? Dronebogus (talk) 02:31, 4 February 2024 (UTC)Reply[reply]
@Dronebogus: From what I have heard, not technically, but it can be accessed by those with local or global ip block exemptions and access to proxies. See also w:Wikipedia:Advice to users using Tor to bypass the Great Firewall.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 02:45, 4 February 2024 (UTC)Reply[reply]
If it’s de jure illegal in the PRC, then we shouldn’t consider their laws with regard to anything we do. It’s like a speakeasy warning people about the no smoking ordinance. Dronebogus (talk) 02:47, 4 February 2024 (UTC)Reply[reply]