Commons:Batch uploading


Commons Batch Uploading is a project to centralize requests for uploading collections of files that have been released into the public domain or under a Commons-compatible license. Each request is assigned to a bot operator, who works out how the request can be fulfilled.

Before you request a batch upload here, please read the guide to batch uploading first.

See w:Wikipedia:Public domain image resources for potential future batch uploads.

Related project: Commons:Library back up project aims to upload public-domain books from libraries in all languages.

Requests

Deepin icons[edit]

Deepin's icons.

Source to upload from[edit]

https://github.com/linuxdeepin/deepin-icon-theme

License[edit]

This work is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3 of the License, or any later version. This work is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See version 3 of the GNU General Public License for more details.

Description[edit]

  • Do the media URLs follow a pattern?
Sure do. Example: https://github.com/linuxdeepin/deepin-icon-theme/blob/master/Sea/apps/scalable/accessories-text-editor.svg
  • Does the site have an API?
Yes.
  • What else could ease uploading?
Not sure.
  • Did you contact the site owner?
Nope.
  • Is there a template that could be used on the file description pages, or should one be created?
User:Psiĥedelisto/Deepin icons
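A minimal sketch of how the SVGs could be pulled via the GitHub contents API (the directory path shown is just the subset from the example URL above; error handling and API rate limits are omitted):

import requests

API = "https://api.github.com/repos/linuxdeepin/deepin-icon-theme/contents"

def list_svgs(path):
    """List the SVG files in one directory of the repository via the GitHub contents API."""
    entries = requests.get(f"{API}/{path}", timeout=30).json()
    return [e for e in entries if e["name"].endswith(".svg")]

# Directory path follows the example blob URL quoted above (one theme/context only).
for entry in list_svgs("Sea/apps/scalable"):
    svg = requests.get(entry["download_url"], timeout=30).content
    with open(entry["name"], "wb") as fh:
        fh.write(svg)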

Psiĥedelisto (talkcontribs) please always ping! 18:51, 3 July 2023 (UTC)Reply[reply]

Opinions[edit]

Assigned to Progress Bot name Category
 Half done Category:Deepin Icon Theme

@Psiĥedelisto: Hi! I looked deeper into this and uploaded the majority of the icons. Unfortunately, some icons are covered by copyright, as they are derivative works. --PantheraLeo1359531 😺 (talk) 19:17, 20 January 2024 (UTC)

IBM Research on YouTube[edit]

Source to upload from[edit]

https://www.youtube.com/@ibmresearch/videos

License[edit]

Virtually all uploads to the IBM Research channel are licensed under the Creative Commons Attribution 3.0 Unported license, per the License tag in the description of each video.

Description[edit]

784 videos (and counting, as of the time I'm writing this) of pure IBM and technology-related gold. Lots of great photography and headshots to extract from these. Some of the content therein may contain non-free elements over the de minimis threshold, but from what I've watched so far those are few and far between. It would be trivial to download all videos using youtube-dl; re-encoding each video to fit within the 100 MB upload limit is a different story, however.
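A rough sketch of the download/re-encode pipeline described above, assuming youtube-dl (or its maintained fork yt-dlp, which handles @handle channel URLs) and ffmpeg are installed; the bitrate arithmetic is only a heuristic for landing under the stated size limit:

import subprocess
import youtube_dl  # yt-dlp is a drop-in fork with the same Python API

CHANNEL = "https://www.youtube.com/@ibmresearch/videos"

# Download every video from the channel, keeping the video id as the filename.
with youtube_dl.YoutubeDL({"outtmpl": "%(id)s.%(ext)s"}) as ydl:
    ydl.download([CHANNEL])

def reencode_to_webm(src, dst, target_mb=95):
    """Re-encode to VP9/Opus WebM, aiming below the 100 MB limit mentioned above.
    The bitrate formula is a rough heuristic, not an exact size guarantee."""
    duration = float(subprocess.check_output(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=nw=1:nk=1", src], text=True))
    video_kbps = int(target_mb * 8 * 1024 / duration) - 96  # leave ~96 kbps for audio
    subprocess.run(["ffmpeg", "-y", "-i", src, "-c:v", "libvpx-vp9",
                    "-b:v", f"{video_kbps}k", "-c:a", "libopus", "-b:a", "96k", dst],
                   check=True)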

DigitalIceAge (talk) 04:52, 17 November 2023 (UTC)Reply[reply]

Opinions[edit]

Assigned to Progress Bot name Category

Newspapers by Feureau in Internet Archive[edit]

Source to upload from


License
  • Public domain


Description
  • This user has uploaded more than a thousand old newspapers from Indonesia, as well as the Dutch magazine Tong Tong.

Please help import these newspapers; they would make a great addition to Category:Newspapers_of_Indonesia.

Bennylin (yes?) 18:29, 18 February 2023 (UTC)Reply[reply]

Opinions[edit]

Assigned to Progress Bot name Category

Babad Diponegoro from Internet Archive[edit]

Source to upload from
License

Public Domain

Description

The IA item doesn't provide a DjVu or PDF version, only zipped JPEGs (1429 files and 1303 files). Bennylin (yes?) 06:57, 15 February 2023 (UTC)

I found another file; same situation, no PDF/DjVu.
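Since the IA items only ship zipped JPEGs, the pages could be stitched into a single PDF locally before uploading. A minimal sketch using the img2pdf library (file and path names are illustrative):

import glob
import zipfile
import img2pdf

# Extract one of the zipped JPEG sets and join the pages into a single PDF.
with zipfile.ZipFile("babad-diponegoro-part1.zip") as zf:
    zf.extractall("pages")

jpegs = sorted(glob.glob("pages/**/*.jpg", recursive=True))
with open("Babad Diponegoro (part 1).pdf", "wb") as out:
    out.write(img2pdf.convert(jpegs))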

Opinions[edit]

Assigned to Progress Bot name Category

VK Icons[edit]

Source to upload from: https://github.com/VKCOM/icons/tree/master/src/svg

License: MIT

Description: examples of already uploaded files are in Category:VK Icons


Артём 13327 (talk) 17:44, 21 October 2022 (UTC)Reply[reply]

Opinions[edit]

Assigned to Progress Bot name Category

fluentui-emoji[edit]

Source to upload from https://github.com/microsoft/fluentui-emoji/tree/main/assets


License MIT


Description


Артём 13327 (talk) 17:41, 21 October 2022 (UTC)Reply[reply]

Opinions[edit]

Assigned to Progress Bot name Category

CC0 ant micro-CTs[edit]

X-ray microtomograms of ants

Source to upload from

Dryad Subject Area: cybertype - Blacklight Search Results

License

Creative Commons CC0 License (Q6938433)

Description

Arlo James Barnes 23:01, 10 June 2022 (UTC)Reply[reply]

Opinions[edit]

Assigned to Progress Bot name Category

Denkmalatlas Niedersachsen[edit]

Images of cultural monuments in Lower Saxony, Germany.

Source to upload from

https://denkmalatlas.niedersachsen.de/viewer/

License

CC BY-SA 4.0

Description

Images of cultural monuments in Lower Saxony, Germany, from the "Denkmalatlas Niedersachsen" project of the Lower Saxony State Office for Heritage Conservation. The project offers exterior shots of the monuments. Photos shot from public space are permitted in accordance with the freedom of panorama in Germany. For published photos shot on private property, the State Office has the consent of the property owner. In the "Denkmalatlas", all photos are published under the license CC BY-SA 4.0.

Timk70 (talk) 16:00, 10 June 2022 (UTC)Reply[reply]

Opinions[edit]

Assigned to Progress Bot name Category

Bull of Heaven[edit]

Source to upload from

https://archive.org/details/BullOfHeaven


License

Most of these are in the public domain, but a few are under non-commercial licenses.


Description

The majority of these files are audio files to be uploaded to Category:Audio files of music by Bull of Heaven or its subcategory Category:Roman Numeral series, and the ones with three-digit numbers in front of them will have their titles formatted like the examples already in that category. However, it should be noted that Bull of Heaven is a very avant-garde band, and so, not all of their releases will have a single OGG file that contains all of the music for that release. In that case, I'll be skipping it. Let me know if you have any questions.

Lizardcreator (talk) 21:23, 27 May 2022 (UTC)Reply[reply]

Opinions[edit]

Assigned to Progress Bot name Category

Editora Fi[edit]

Source to upload from

https://www.editorafi.org/catalogo

License

CC BY-SA 4.0

Description

Open Access/CC books, perfect for Wikisource. I believe everything is hosted on Google Drive. A specific template is needed. Erick Soares3 (talk) 12:36, 9 March 2022 (UTC)

Opinions[edit]

Assigned to Progress Bot name Category

SciELO Books[edit]

Source to upload from
http://books.scielo.org/
https://archive.org/details/scielobooks
https://archive.org/details/@scielo_books
License

Several types of Creative Commons (including non-commercial) and Public Domain.

Description

SciELO Books, part of SciELO Brazil (also an amazing source for Wikisource), has partnerships with several academic publishers to release or re-release their works as Open Access, either under CC licenses or in the public domain.

Since it is clearly legal, it should be an amazing resource for Wikisource and Wikimedia in general.

The bot should be able to read the archive and select the works with Commons-friendly licenses. On the Internet Archive, some works released as CC BY 4.0 are registered as non-commercial (example). A similar thing also happens on the main website: 1 and 2. It would be nice if the bot could compare the main website against the Internet Archive collection for missing files and check at least once a month for new works released under Commons-friendly licenses (see the sketch below).

An official template is necessary. Thanks, Erick Soares3 (talk) 20:38, 6 March 2022 (UTC)
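A minimal sketch of the license check mentioned above, using the internetarchive Python library; because the IA license metadata is sometimes wrong (as noted), anything selected here would still need to be cross-checked against books.scielo.org:

from internetarchive import search_items, get_item

COMMONS_OK = ("creativecommons.org/licenses/by/",
              "creativecommons.org/licenses/by-sa/",
              "creativecommons.org/publicdomain/")

for result in search_items("collection:scielobooks", fields=["identifier", "licenseurl"]):
    raw = result.get("licenseurl") or ""
    license_url = (" ".join(raw) if isinstance(raw, list) else raw).lower()
    if any(fragment in license_url for fragment in COMMONS_OK):
        item = get_item(result["identifier"])
        # queue `item` for upload and cross-check against books.scielo.org here
        print(result["identifier"], license_url)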

Opinions[edit]

Assigned to Progress Bot name Category

Official Journal of the European Union[edit]

Source to upload from

Eurlex: https://eur-lex.europa.eu/oj/direct-access.html

License

{{PD-EUGov}} for the EU itself and {{PD-EdictGov}} for the US side. Admittedly, I'm not sure how this will work for issues before certain dates (e.g. before the act of 2011 mentioned in {{European Union Government}}) or prior to the EU's existence (i.e. during the time of the European Coal and Steel Community). However, since the content is official legislation and communication, I think it should be OK.

Description

Describe the content to be uploaded in detail (audio files, images by …), and what makes it valuable to Wikimedia Commons.

The files will be PDF copies of all issues of the Official Journal of the European Union (OJEU), the official gazette of the EU. Thus, it would be a useful resource for EU legislation and communications.


MSG17 (talk) 14:47, 8 February 2022 (UTC)Reply[reply]

Opinions[edit]

Assigned to Progress Bot name Category

Biblioteca Digital Hispánica[edit]

  • Source to upload from: Photography collection from the Biblioteca Digital Hispánica: search query
    • Do the media URLs follow a pattern? metadata permalink, viewer permalink, JPEG deep link
    • Does the site have an API? No
    • What else could ease uploading? (is the site valid XHTML, do they use a WCM…?) The HTML is quite well-formed and follows a homogeneous structure, although the metadata tabulation is a bit odd.
    • Did you contact the site owner? No
  • Describe the works to be uploaded in detail (audio files, images by …): This request is for a subset of this Digital Library covering photographs and engravings. Note that the JPEG deep link provided above is valid only for fetching the first page of a document. For this collection, most (all?) works are a single page.
  • Which license tag(s) should be applied? It depends on the work. I think it should generally be PD-old-assumed, and in some cases PD-old-70 and PD-old-100.

I already have a scraper and a (work-in-progress) page generator for this collection, so I can help provide everything in the required format. Anyway, I think the bulk of the pending work is identifying the author and the right license tag for each work.
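For the tag selection, a simple rule of thumb along the lines suggested above could be scripted (the thresholds follow the usual Commons PD-old practice; URAA/US status and edge cases still need a human check):

from datetime import date

def pd_old_tag(death_year=None, creation_year=None):
    """Pick one of the tags mentioned above, or None when a human review is needed."""
    this_year = date.today().year
    if death_year is not None and this_year - death_year > 100:
        return "{{PD-old-100}}"
    if death_year is not None and this_year - death_year > 70:
        return "{{PD-old-70}}"
    if death_year is None and creation_year is not None and this_year - creation_year > 120:
        return "{{PD-old-assumed}}"
    return None

print(pd_old_tag(death_year=1890))        # {{PD-old-100}}
print(pd_old_tag(creation_year=1880))     # {{PD-old-assumed}}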

MarioGom (talk) 21:15, 12 October 2020 (UTC)Reply[reply]

Opinions[edit]

Assigned to Progress Bot name Category

Perry–Castañeda Library Map Collection[edit]

  • Source to upload from: http://legacy.lib.utexas.edu/maps/ams/
    • Do the media URLs follow a pattern?
      The URLs themselves, so far as I can work out, don't; but in the same way that Adobe Acrobat Pro can be set to go down a list of web links and generate a single PDF, a bot may be able to do the same.
    • Does the site have an API?
      A bit technical for me, but I don't think so
    • What else could ease uploading? (is the site valid XHTML, do they use a WCM…?)
      I don't know
    • Did you contact the site owner?
      No
  • Describe the works to be uploaded in detail (audio files, images by …): vast series of maps generated by the US Army Map Service (i.e., PD-USGov-Military) in the Perry–Castañeda Library Map Collection, The University of Texas at Austin
  • Which license tag(s) should be applied? PD-USGov-Military
  • Is there a template that could be used on the file description pages? Do you think a special template should be created? In terms of the file naming convention, this could follow that of the site: the top of each page has the series, a credit to the US AMS, and the date; each map file then has the name of the map, the sheet number (for the index pages, cross-references from adjoining maps, etc.), and the scale.

NB there are already some files at Category:India maps by U.S. Army Map Service (plus various other individual uploads etc. within Category:Maps by the United States Army Map Service), and it looks from below on this page and e.g. this Commons image that "Slick-o-bot" may have been used in 2012 to upload some or all of these (I'm most keen on the various Japan-related maps, especially the 3x Honshu 1:50,000 series, but imagine every region would benefit).
This would be a mind-bogglingly great addition, thank you, Maculosae tegmine lyncis (talk) 14:08, 13 August 2020 (UTC)Reply[reply]

PS, these are much more detailed than Google Maps - and the labelling is in English (with some Japanese too), Maculosae tegmine lyncis (talk) 19:27, 21 August 2020 (UTC)
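A hypothetical sketch of how a bot could walk one series index page and collect the sheet images (it assumes the sheets are linked directly as JPEG/PDF files, which appears to be the case; the series path used here is illustrative):

import urllib.parse
import requests
from bs4 import BeautifulSoup

BASE = "http://legacy.lib.utexas.edu/maps/ams/"

def sheet_links(series_url):
    """Collect links to map sheet files on one series index page.
    (Hypothetical selectors: any anchor ending in .jpg or .pdf is treated as a sheet.)"""
    soup = BeautifulSoup(requests.get(series_url, timeout=30).text, "html.parser")
    for a in soup.find_all("a", href=True):
        if a["href"].lower().endswith((".jpg", ".pdf")):
            yield a.get_text(strip=True), urllib.parse.urljoin(series_url, a["href"])

for name, url in sheet_links(BASE + "japan_50k/"):  # series path is illustrative
    print(name, url)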

Opinions[edit]

Assigned to Progress Bot name Category

Claremont Colleges Digital Library[edit]

  • Describe the works to be uploaded in detail (audio files, images by …):

All photos in the Boynton Collection of Early Claremont, all of which are dated prior to 1925. If it's not too much trouble, it would also be very nice to have all photos in the Claremont Colleges Photo Archive and City of Claremont History Collection dated prior to 1925.

  • Is there a template that could be used on the file description pages? Do you think a special template should be created? Not sure

Sdkb (talk) 07:42, 8 August 2020 (UTC)Reply[reply]

Opinions[edit]

Were these photos published prior to 1925, or merely taken prior to then? Publication needs to be pre-1925 for {{PD-US-expired}} to be allowed. Pi.1415926535 (talk) 08:25, 8 August 2020 (UTC)Reply[reply]

@Pi.1415926535: The about page states The collection ... is believed to have come to Pomona College included with the papers of Charles Luther Boynton, a Pomona College alumnus and missionary to China. Boynton himself graduated from Pomona around 1900. I can't say for sure the year his papers came into possession of the college, though (which I assume would be the date of publication?). The library would probably tell us if we asked, though. Sdkb (talk) 05:47, 10 August 2020 (UTC)Reply[reply]
Acquisition by the college would not be considered publication for the purposes of copyright. Only use in a publicly released printed material, or on a webpage, is considered publication. Pi.1415926535 (talk) 06:48, 10 August 2020 (UTC)Reply[reply]
@Pi.1415926535: does being added to a library not count as publication? The collection has presumably been housed in the special collections department and publicly available to anyone who requested access since it was obtained. Sdkb (talk) 20:55, 10 August 2020 (UTC)Reply[reply]
A collection merely being in a library does not constitute publication, by my reading. Under copyright law, publication is the distribution of copies or phonorecords of a work to the public by sale or other transfer of ownership or by rental, lease, or lending. Offering to distribute copies or phonorecords to a group of people for purposes of further distribution, public performance, or public display also constitutes publication. (From here.) Is the death date of Boynton known? If it was before 1950, then {{PD-old-70}} applies. Pi.1415926535 (talk) 23:14, 10 August 2020 (UTC)Reply[reply]
@Pi.1415926535: According to here, Boynton died in 1961, so not quite. The above would seem to me to indicate being in a library counts, though, because of lending, which is what a library does. Sdkb (talk) 19:11, 11 August 2020 (UTC)Reply[reply]
A collection in the library would be the originals (not copies) and is likely for use only in the library (not lending). I understand that you wish to have this collection available on Commons, but from the available evidence I do not believe the images are public domain. Pi.1415926535 (talk) 20:58, 11 August 2020 (UTC)Reply[reply]
Assigned to Progress Bot name Category

Balinese Lontar from Internet Archive[edit]

  • Source to upload from: http://archive.org/details/Bali
    • Do the media URLs follow a pattern? yes
    • Does the site have an API? yes
    • What else could ease uploading? (is the site valid XHTML, do they use a WCM…?) N/A
    • Did you contact the site owner? yes
  • Describe the works to be uploaded in detail (audio files, images by …):
    • Balinese Lontar (palm-leaf manuscripts) from the Internet Archive's Bali collection
    • Each manuscript is a PDF containing photographs of the originals
    • This batch upload is in connection with an active project grant.
  • Which license tag(s) should be applied?

{{PD-scan}}, following the behavior of the ia-upload tool.

  • Is there a template that could be used on the file description pages? Do you think a special template should be created?

Yes. I will follow the ia-upload template closely when doing the batch upload. I will use a short Python script that aggregates info from the Internet Archive API and sends each upload request via pywikibot. If necessary I will create a bot account for this purpose. There are approximately 2700 items to upload.
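A condensed sketch of the script described above, using the internetarchive library plus pywikibot (the description wikitext and the pywikibot upload call are assumptions; the real run will mirror the ia-upload template):

import internetarchive
import pywikibot

site = pywikibot.Site("commons", "commons")

def upload_lontar(identifier):
    """Upload the PDF of one Bali-collection item with a minimal description page."""
    item = internetarchive.get_item(identifier)
    md = item.metadata
    text = (
        "== {{int:filedesc}} ==\n"
        "{{Information\n"
        + f"|description={md.get('title', identifier)}\n"
        + f"|source=https://archive.org/details/{identifier}\n"
        + f"|author={md.get('creator', 'Unknown')}\n"
        + f"|date={md.get('date', '')}\n"
        "}}\n\n== {{int:license-header}} ==\n{{PD-scan}}\n"
    )
    pdf = next(f for f in item.files if f["name"].endswith(".pdf"))
    filepage = pywikibot.FilePage(site, f"File:{identifier}.pdf")
    # Direct URL upload needs the appropriate user right; downloading the file
    # locally and passing source_filename instead also works.
    site.upload(filepage,
                source_url=f"https://archive.org/download/{identifier}/{pdf['name']}",
                comment="Batch upload of Balinese lontar from the Internet Archive",
                text=text)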

Lautgesetz (talk) 01:03, 4 July 2020 (UTC)Reply[reply]

Opinions[edit]

Assigned to Progress Bot name Category

Catalog of Copyright Entries[edit]

  • Source to upload from: https://archive.org/details/copyrightrecords?&sort=-date
    • Do the media URLs follow a pattern? Unsure.
    • Does the site have an API? Unsure, but there seems to be an RSS feed - not sure if it contains all entries.
    • What else could ease uploading? (is the site valid XHTML, do they use a WCM…?)

Commons has tools for transferring uploads from the Internet Archive.

    • Did you contact the site owner?

No.

  • Describe the works to be uploaded in detail (audio files, images by …):

Scanned volumes (647) of the Catalog of Copyright Entries for the United States, covering the period 1891-1977/8.

  • Which license tag(s) should be applied?

{{PD-USgov}}

  • Is there a template that could be used on the file description pages? Do you think a special template should be created?

No new templates are required; additional fields could be added to {{Book}} or {{Information}}.

ShakespeareFan00 (talk) 07:37, 3 June 2020 (UTC)Reply[reply]

Opinions[edit]

Assigned to Progress Bot name Category
Completed Category:Catalogs of Copyright Entries

Commons:Batch uploading/Modern Sketch[edit]

  • Source to upload from: This Complete Gallery
    • Do the media URLs follow a pattern? There are 39 links; each link contains all the pages of one issue, in order.
    • Does the site have an API? I don't know
    • What else could ease uploading? (is the site valid XHTML, do they use a WCM…?)
    • Did you contact the site owner? It's Public Domain
  • Describe the works to be uploaded in detail (audio files, images by …):

Each one of the 39 issues of the Chinese magazine "Modern Sketch". They are in the public domain for the reasons given in the following parameter. All the pages can be uploaded.

  • Which license tag(s) should be applied?

PD-China and PD-1996

  • Is there a template that could be used on the file description pages? Do you think a special template should be created?

No Special Template: PD-China and PD-1996 as license and Category:Modern Sketch as Category. TaronjaSatsuma (talk) 10:29, 18 February 2020 (UTC)Reply[reply]

Opinions[edit]

Assigned to Progress Bot name Category Modern Sketch

Japanese Homes and their surroundings[edit]

  • Source to upload from: List of files, List of illustrations with names assigned to each number. It would be really nice if the figures carried their original names in the uploaded filenames.
    • Do the media URLs follow a pattern?:
    • Does the site have an API?:
      • I assume that Gutenberg has an API. If someone can point me at instructions on how to use it with Commons, I might be able to do this myself; I assume this is a beaten path...
    • What else could ease uploading? (is the site valid XHTML, do they use a WCM…?):
    • Did you contact the site owner?:
      • No, for Gutenberg this seems redundant.
      • I uploaded some manually already, with permission, from another site (files named with the pattern https://www.kellscraft.com/JapaneseHomes/JapanHomes001.jpg, up to JapanHomes301.jpg; 129-130 are duplicates; figure numbers do not align with file names, so combined illustrations cause no disruption to sequential numbering). The Gutenberg images are better in many, but not all, cases (higher-res, better scan).
      • The same book is also at [1], but the images seem to be worse.
  • Describe the works to be uploaded in detail (audio files, images by …):
  • Which license tag(s) should be applied?:
    • {{PD-old-70-1923}}
    • note: five years from PD-100
  • Is there a template that could be used on the file description pages? Do you think a special template should be created?

Thank you! HLHJ (talk) 04:17, 4 February 2020 (UTC)Reply[reply]

Opinions[edit]

Assigned to Progress Bot name Category

Baseball Hall of Fame[edit]

Is there a practical way to batch extract and upload files that are tagged with "http://rightsstatements.org/vocab/NoC-US/1.0/" under the "Copyright note" section? They basically confirm which files are in the public domain. Or they will sometimes post in that same section "The National Baseball Hall of Fame and Museum is not aware of any U.S. copyright or any other restrictions in the documents."

Oaktree b (talk) 02:16, 23 November 2019 (UTC)Reply[reply]

Opinions[edit]

Assigned to Progress Bot name Category

OpenUp RBINS Beetles collection[edit]

  • Source to upload from: http://projects.biodiversity.be/openuprbins/
    • Do the media URLs follow a pattern? Yes: http://projects.biodiversity.be/openup/rbins/pictures_only/<PICTURE_ID>.jpg
    • Does the site have an API? No
    • What else could ease uploading? (is the site valid XHTML, do they use a WCM…?) Since I helped build the website, I have a CSV file containing metadata for each picture: scientific name, family, location where the beetles were collected, photographer name, ...
    • Did you contact the site owner? Yes. They approve the upload of medium resolution images (such as on the existing website), and may approve later higher resolution versions of those.
  • Describe the works to be uploaded in detail (audio files, images by …): 4,074 detailed pictures of 1,926 different beetles species. See content on http://projects.biodiversity.be/openuprbins/
  • Which license tag(s) should be applied? {{CC-BY-SA-4.0}}
  • Is there a template that could be used on the file description pages? Do you think a special template should be created?
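A minimal sketch of how the CSV mentioned above could drive the description pages (the column names are assumptions and need to be replaced with the real CSV headers):

import csv

PICTURE_URL = "http://projects.biodiversity.be/openup/rbins/pictures_only/{}.jpg"

with open("rbins_metadata.csv", newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):
        url = PICTURE_URL.format(row["picture_id"])
        description = (
            "{{Information\n"
            "|description={{en|1=" + row["scientific_name"] + " (" + row["family"] + "), "
            "collected in " + row["location"] + "}}\n"
            "|source=" + url + "\n"
            "|author=" + row["photographer"] + "\n"
            "}}\n{{cc-by-sa-4.0}}\n"
            "[[Category:" + row["family"] + "]]"
        )
        # hand `url` and `description` to the upload tool of choice here
        print(url)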

Niconoe (talk) 09:12, 26 June 2019 (UTC)Reply[reply]

Opinions[edit]

Assigned to Progress Bot name Category

GeoDIL[edit]

There are 3096 pictures of rocks and minerals.

  • Source to upload from: https://geodil.dperkins.org/
    • Do the media URLs follow a pattern? The images themselves are /i/NUMBER.jpg. The pages for the images are /h/NUMBER.html. Numbers range from 1-3144 with some gaps.
    • Does the site have an API? No.
    • What else could ease uploading? (is the site valid XHTML, do they use a WCM…?) The site owner uses a script to generate the HTML and the sitemap, /sitemap.xml. That data could be modified if it would make uploading significantly easier. On the back end, information is stored in a CSV, /db/details.csv, should that be useful.
    • Did you contact the site owner? Site owner: Douglas Perkins.
  • Describe the works to be uploaded in detail (audio files, images by …): JPGs of rocks and minerals. Most of these were taken by people working on the GeoDIL project at the University of North Dakota, 2001-2002.
  • Which license tag(s) should be applied? 2,711 are CC0, and 14 are government works and PD. The remainder are not freely licensed. All freely licensed images are noted as such on their HTML pages, and the license is also recorded in the sitemap.
  • Is there a template that could be used on the file description pages? Do you think a special template should be created?
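A small sketch of how the back-end CSV could be used to pick out only the freely licensed images (the column names are assumptions to be checked against /db/details.csv):

import csv
import requests

BASE = "https://geodil.dperkins.org"

rows = csv.DictReader(requests.get(f"{BASE}/db/details.csv", timeout=30).text.splitlines())

for row in rows:
    license_field = row.get("license", "").strip().lower()
    if license_field in ("cc0", "public domain"):
        number = row["number"]
        image_url = f"{BASE}/i/{number}.jpg"
        page_url = f"{BASE}/h/{number}.html"
        print(number, license_field, image_url, page_url)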

Douglas Perkins (talk) 01:14, 10 March 2019 (UTC)Reply[reply]

Opinions[edit]

Assigned to Progress Bot name Category

NPGallery[edit]

  • Describe the works to be uploaded in detail (audio files, images by …):

"NPGallery supports a wide array of digital asset file types (images, MS office formats, adobe pdfs, audio files, videos)." We would, I think, be primarily interested in their photographs of national parks.

  • Which license tag(s) should be applied?

{{PD-USgov}} may apply to many images, but they need to be checked individually. This could probably be automated to some degree.

  • Is there a template that could be used on the file description pages? Do you think a special template should be created?

Standard templates such as {{Photograph}} should be acceptable.

This was spotted by Animalparty on COM:VP. BMacZero (talk) 00:12, 22 January 2019 (UTC)Reply[reply]

Opinions[edit]

  • Comments by Animalparty.
  • {{PD-USGov}} would be the most inclusive template, but is rather vague. More specific templates include {{PD-USGov-NPS}} and {{PD-USGov-Interior}}. Any Photographer field that says "NPS Staff" or "NPS Photo" (e.g. [2]) should automatically get PD-USGov-NPS.
  • I think {{Photograph}} or {{Information}} are fine, ideally with detailed semi custom fields for keywords, collection, location, etc., as seen in the Library of Congress images uploaded by User:Fæ (example).
  • The more pre- or auto-categorization, or at least clearly noting collection, year/decade, geographic unit, etc., the better; otherwise we dump thousands of unsorted images into already cluttered categories like Yosemite National Park.
  • There may be overlap with some material on Archives.gov, individual National Park Flickr feeds/websites, and such material already uploaded. But I think the value of the images uploaded at their largest file size and with curated metadata outweighs the inconvenience of some duplication.
  • Many files have geographical coordinates, but I suspect that many are generic coordinates of the center of the National Park or Monument, rather than being unique to the photograph.
  • Thanks for initiating this, sorry if these comments are basic/obvious to experienced mass uploaders. --Animalparty (talk) 01:29, 22 January 2019 (UTC)Reply[reply]


On some more inspection, certain images may be a bit problematic in terms of copyright, namely works of art (e.g. paintings and sculptures) not explicitly credited to NPS employees, but that are nonetheless labeled "Public domain:Full Granting Rights". Some of these appear to be created by Artist-in-Residence programs (e.g. this gallery and this one), and from browsing elsewhere it appears that different parks may have different rules regarding copyrights. Rocky Mountain National Park states "Artists are also required to provide the copyright for this artwork to the National Park Service. The National Park Service will not allow the commercial use of any donated artwork once it is selected and accessioned into the Park's permanent museum collection", which is a restriction against public domain. Perhaps no art from Rocky Mountain was transferred to NPGallery? These 2 images from the U.S.S. Arizona memorial are labeled PD on NPGallery, yet on a different NPS page their status is ambiguous, with the included usage disclaimer "Multimedia credited with a copyright symbol (indicating that the creator may maintain rights to the work) or credited to any entity other than NPS must not be presumed to be public domain; contact the host park or program to ascertain who owns the material" (emphasis added).

Side note: I think every photograph I've viewed on NPGallery has the Copyright disclaimer "Permission must be secured from the individual copyright owners to reproduce any copyrighted materials contained within this website. Digital assets without any copyright restrictions are public domain.", but every file is also labeled Public domain in the Constraints Information.

Another snag I've noticed, just from browsing the term "Artist", is that some images are scans/photographs from newspapers that were most likely not originally created by Federal employees (although the derivative scans/photos are): for instance Louis Grell illustration album, with cartoons by Louis Grell published in World War I.[3] These are still PD via pre-1924 publication (and possibly by {{PD-USGov-Military}}), but it hinders accurate bot-designation of the PD template.

And the public domain rationale is ambiguous on this video, whose Copyright field reads "Photo courtesy of Betty Maya Foott, Colorado Plateau Dark Sky Cooperative" (so, probably not a federal employee), yet it is nonetheless labeled "Public domain:Full Granting Rights". I may have just found a relative handful of exceptions. But there are also probably a good deal of historical photographs that are PD-1923 or PD-no-notice yet not US Government works. Perhaps a generic umbrella template similar to {{Flickr-no known copyright restrictions}} could be used to encapsulate the different possibilities, like {{PD-NPGallery}}.

I think it would be a good idea to contact someone at NPGallery to double-check that media labeled public domain for whatever reason is in fact public domain, especially when the rationale is ambiguous or lacking. We also might want to consider not transferring the somewhat intimidating, potentially misleading Copyright message "Permission must be secured from the individual copyright owners to reproduce any copyrighted materials contained within this website. Digital assets without any copyright restrictions are public domain." This may be a liability disclaimer on NPGallery's end, but ideally, everything we transfer to Commons would be in the public domain, and so no permission need be secured. --Animalparty (talk) 11:45, 25 January 2019 (UTC)

Working on adapting my bot to handle this. I'll contact them, and also start with only things that are obviously PD. BMacZero (talk) 17:50, 9 February 2019 (UTC)Reply[reply]
I e-mailed NPGallery a while back about the public domain statuses of images and neglected to share here. Unfortunately got a not-too-helpful response essentially saying that the licenses and attributions are not "consistent" and "there is not a good way to assure an asset id is truly in the public domain, or not". We'll have to figure out what types of signals we can rely on to decide whether {{PD-USGov-NPS}} or other templates apply. Of course, publication pre-1924 will be a good one to start. BMacZero (talk) 04:30, 11 April 2019 (UTC)Reply[reply]
I'm currently harvesting a list of all the images. It's going a bit slowly but it should only take a few days. After that I'll start downloading the metadata, which may take several days. BMacZero (talk) 04:45, 12 April 2019 (UTC)
Ah, a shame about the inconsistent licensing criteria. I guess pre-1924 and files credited to "NPS staff" or similar can be prioritized for now. --Animalparty (talk) 19:13, 12 April 2019 (UTC)Reply[reply]
Started downloading the item metadata. You can check on the progress on this fun page I made. BMacZero (talk) 15:49, 13 April 2019 (UTC)Reply[reply]
BRFA filed (Commons:Bots/Requests/BMacZeroBot 6). BMacZero (talk) 05:35, 10 May 2019 (UTC)Reply[reply]
Started uploading last night, will probably be ongoing for quite a while. See Category:Images from NPGallery to check to help with validation and categorization! – BMacZero (🗩) 16:35, 29 June 2019 (UTC)Reply[reply]
Assigned to Progress Bot name Category
User:BMacZero In progress User:BMacZeroBot Category:Images from NPGallery to check
See Also

APPLAUSE[edit]

  • Does the site have an API? Yes: 101_xxxx (x is a variable number)
  • What else could ease uploading? (is the site valid XHTML, do they use a WCM…?) https://www.plate-archive.org/query/
  • Did you contact the site owner? No
  • Describe the works to be uploaded in detail (audio files, images by …): Historical astronomical plates, logbooks, envelopes or notes https://www.plate-archive.org/applause/info/gallery/ (we don't need to upload all of them, but I think the plates would be interesting).
  • Which license tag(s) should be applied?

The database is licensed under CC-0 (https://www.plate-archive.org/applause/project/disclaimer/)

  • Is there a template that could be used on the file description pages? Do you think a special template should be created? Yes, I think a template should be created.

Habitator terrae 🌍 16:37, 27 October 2018 (UTC)Reply[reply]

Opinions[edit]

Assigned to Progress Bot name Category

PauloGuedes[edit]

This URL generates 94 results pages, each linking to 10 individual image pages. Each image page URL is
http://arquivomunicipal2.cm-lisboa.pt/X-arqWeb/ContentPage.aspx?ID=code&Pos=1&Tipo=PCD
while the image in it is at
http://arquivomunicipal2.cm-lisboa.pt/X-arqWeb/ContentDisplay.aspx?ID=code&Pos=1&Tipo=PCD&Thb=0
with code being a 20-digit lower-case hex number — which has no bearing on the official identification references (cota — see below).
  • Does the site have an API? dunno
    • What else could ease uploading? (is the site valid XHTML, do they use a WCM…?) consistent, machine-generated HTML (parsable, even if not necessarily valid)
    • Did you contact the site owner? No
  • Describe the works to be uploaded in detail (audio files, images by …): Smallish batch (711 according to the inventory, or 933 according to the database search report) of scanned b/w photos in various hardcopy formats.
  • Is there a template that could be used on the file description pages? Do you think a special template should be created? {{AMLx}}; it needs to be fed at least {{{cota}}} (given also as código de referência), a slashed crumbthread-like alphanumeric string of variable length; other values to be (trivially) extracted from each image page are:
  • Título
  • Assunto
  • Data(s)
  • Dimensão e suporte
  • Nota(s)
  • Cotas antigas or Cotas or Cota(s)
The filenames can be constructed from Título (possibly trimmed) and the last two crumbs of {{{cota}}} in parentheses, with the slash removed (the result being one of the Cotas).
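A hypothetical sketch of the scrape and filename construction described above (the field extraction assumes each label is followed by its value in the page text, which would need to be verified against the actual HTML; "Cota(s)" is used as the label name here):

import re
import requests
from bs4 import BeautifulSoup

PAGE = ("http://arquivomunicipal2.cm-lisboa.pt/X-arqWeb/ContentPage.aspx"
        "?ID={code}&Pos=1&Tipo=PCD")
IMAGE = ("http://arquivomunicipal2.cm-lisboa.pt/X-arqWeb/ContentDisplay.aspx"
         "?ID={code}&Pos=1&Tipo=PCD&Thb=0")

FIELDS = ["Título", "Assunto", "Data(s)", "Dimensão e suporte", "Nota(s)", "Cota(s)"]

def scrape(code):
    """Pull the descriptive fields from one image page."""
    html = requests.get(PAGE.format(code=code), timeout=30).text
    text = BeautifulSoup(html, "html.parser").get_text("\n")
    data = {}
    for field in FIELDS:
        m = re.search(rf"{re.escape(field)}\s*\n\s*(.+)", text)
        if m:
            data[field] = m.group(1).strip()
    return data, IMAGE.format(code=code)

def filename(data):
    """Build 'Título (last two cota crumbs)' as proposed above."""
    crumbs = data["Cota(s)"].split("/")
    return f"{data['Título'][:120]} ({crumbs[-2]}{crumbs[-1]}).jpg"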

-- Tuválkin 16:54, 30 June 2018 (UTC)Reply[reply]

Opinions[edit]

Assigned to Progress Bot name Category

VOA News files[edit]

  • Source to upload from: https://web.archive.org/web/*/https://www.voanews.com/mp3/voa/english/nnow/NNOW_HEADLINES.mp3
    • Do the media URLs follow a pattern? They all have the same name. The archive date is given as 14 digits, with the first eight digits being the year, month, and day respectively, and the remaining six digits being the time of day archived, in UTC.
    • Does the site have an API? Don't know.
    • What else could ease uploading? (is the site valid XHTML, do they use a WCM…?) Don't know.
    • Did you contact the site owner? No need to, since these are U.S. government works and therefore in the public domain.
  • Describe the works to be uploaded in detail (audio files, images by …): VOA world news headline newscast audio files for (almost) every day spanning from 5 May 2009 to 6 July 2019.
  • Is there a template that could be used on the file description pages? Do you think a special template should be created? Just use the standard one. Upload as "VOA News Headlines (MONTH DAY, YEAR)". If possible, upload them in FLAC, WAV, and OGG.
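A minimal sketch that enumerates the archived snapshots through the Wayback Machine CDX API and derives the date for the proposed file titles (conversion to FLAC/WAV/OGG would be a separate step):

import requests

CDX = "http://web.archive.org/cdx/search/cdx"
TARGET = "https://www.voanews.com/mp3/voa/english/nnow/NNOW_HEADLINES.mp3"

params = {
    "url": TARGET,
    "output": "json",
    "filter": "statuscode:200",
    "collapse": "timestamp:8",   # keep one snapshot per day
}
rows = requests.get(CDX, params=params, timeout=60).json()
header, snapshots = rows[0], rows[1:]

for row in snapshots:
    record = dict(zip(header, row))
    ts = record["timestamp"]                 # e.g. 20090505123456
    date = f"{ts[0:4]}-{ts[4:6]}-{ts[6:8]}"
    audio_url = f"https://web.archive.org/web/{ts}/{TARGET}"
    print(date, audio_url)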

Illegitimate Barrister (talkcontribs), 13:07, 26 May 2019 (UTC)Reply[reply]

Opinions[edit]

Assigned to Progress Bot name Category

HiRISE[edit]

  • Describe the works to be uploaded in detail (audio files, images by …):
    Images by HiRISE (High Resolution Imaging Science Experiment)
  • Which license tag(s) should be applied?
  • As explained in each image's description page for example: "All of the images produced by HiRISE and accessible on this site are within the public domain: there are no restrictions on their usage by anyone in the public, including news or science organizations. We do ask for a credit line where possible: NASA/JPL/University of Arizona"
  • PD-USGov-NASA or a variation of it to include JPL and University of Arizona must be used.
  • Is there a template that could be used on the file description pages? Do you think a special template should be created?
There is no template yet. One must be created to include all the relevant data, e.g. acquisition date, latitude, longitude, etc., from the label files.
  • Note: Due to JPEG 2000 not currently being supported on Wikimedia Commons, a conversion to PNG is also needed. File sizes may be large!
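A minimal sketch of the JPEG 2000 to PNG conversion step (assuming Pillow was built with OpenJPEG support; GDAL or glymur are alternatives, and the outputs will be very large 16-bit grayscale files):

from PIL import Image

Image.MAX_IMAGE_PIXELS = None   # HiRISE RDR mosaics exceed Pillow's default size check

def jp2_to_png(src, dst):
    with Image.open(src) as im:
        im.save(dst, format="PNG")

jp2_to_png("ESP_053850_2170_RED.JP2", "ESP_053850_2170_RED.png")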

Meisam (talk) 21:58, 20 June 2018 (UTC)Reply[reply]

Opinions[edit]

  •  Support Seems like an interesting project --Kristbaum (talk) 16:00, 8 May 2019 (UTC)Reply[reply]
  •  Info Template:PD-NASA-HiRISE has been created for these images! -- Meisam (talk) 17:31, 11 May 2019 (UTC)Reply[reply]
    @Meisam: - I am interested in pursuing this. I think it would be a logical extension of my work with uploading from ESRS. Do you have any suggestion as to how we most efficiently store the PDS data with each image? Askeuhd (talk) 08:28, 6 June 2022 (UTC)Reply[reply]
    @Askeuhd: I don’t have any good solutions. I suppose we can store them as tEXt chunks in the PNG image and also add them in a table (using wiki templates) to the image description page. -- Meisam (talk) 11:30, 6 June 2022 (UTC)Reply[reply]
    @Meisam: - I would personally much prefer the latter option, I fear that the former option may not be very user friendly. We will also have to parse as much of the data as possible to SDC. I will try to think of a suitable paradigm. Askeuhd (talk) 11:40, 6 June 2022 (UTC)Reply[reply]
Assigned to Progress Bot name Category

PDS data import proposal[edit]

Proposal for the import of PDS data for each image, to ensure as much as possible is added to SDC and necessary data for the user is displayed prominently in the wikitext.

I propose that the entire LBL file is imported as a collapsible text field in the template for each file, preserving all formatting and indentation, so that researchers or other users familiar with the PDS format may be able to use plain-text search for the values we will not be able to add to SDC, similar to this:

Raw Planetary Data System data

PDS Content of LBL file

In addition to this, I have broken down the example file, to try to maximize possible SDC data migration, as well as adding some of the data to a custom wikidata template for this particular import. I am highly interested in any suggestions. I take the liberty of pinging @Multichill: as you have previously been very helpful in a similar endeavour with the ISS photos. I hope you would be interested in adding your valuable input here as well.

I will make a couple of example files in a few days or so to test the SDC structure and a potential template, before starting any preliminary coding, so the concepts can be tested out.

Reference to be set as stated in (P248) --> "Planetary Data System" for all SDC values imported from PDS.

All PDS identifiers can be looked up here for clarification. The LBL example file also contains some in-line comments.
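A small sketch of reading an LBL file into a flat keyword/value mapping, which is the kind of input the table below assumes (the pvl library is a more robust alternative; this sketch flattens OBJECT/GROUP nesting and keeps values as raw strings, units included):

def parse_lbl(path):
    """Very small PDS3 label reader: returns a flat dict of KEYWORD -> value string."""
    keywords = {}
    key, buffer = None, ""
    for line in open(path, encoding="ascii", errors="replace"):
        line = line.split("/*")[0].rstrip()      # drop in-line comments
        if buffer:                               # continuation of a multi-line tuple
            buffer += " " + line.strip()
            if buffer.endswith(")"):
                keywords[key] = buffer
                buffer = ""
            continue
        if "=" in line:
            key, _, value = line.partition("=")
            key, value = key.strip(), value.strip()
            if value.startswith("(") and not value.endswith(")"):
                buffer = value
            else:
                keywords[key] = value
    return keywords

label = parse_lbl("ESP_053850_2170_RED.LBL")
print(label["TARGET_NAME"], label["CENTER_LATITUDE"], label["MAXIMUM_LATITUDE"])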

Paradigm for importing PDS data - based on example ESP_053850_2170_RED.LBL
PDS Identifier PDS Value in example file Commons/SDC identifier Commons/SDC value
PDS_VERSION_ID PDS3 N/A N/A
NOT_APPLICABLE_CONSTANT -9998 N/A N/A
DATA_SET_ID "MRO-M-HIRISE-3-RDR-V1.1" N/A N/A
DATA_SET_NAME "MRO MARS HIGH RESOLUTION IMAGING SCIENCE EXPERIMENT RDR V1.1" part of the series (P179) Appropriate wikidata-entity for this property (to be created)
PRODUCER_INSTITUTION_NAME "UNIVERSITY OF ARIZONA" affiliation (P1416) as a qualifier to creator (P170) University of Arizona (Q503419)
PRODUCER_ID "UA" N/A N/A
PRODUCER_FULL_NAME "ALFRED MCEWEN" creator (P170) "some value" --> "Alfred McEwen"
OBSERVATION_ID "ESP_053850_2170" catalog code (P528) and {{NASA-image}} "some value" --> "ESP_053850_2170", qualified with catalog (P972) and an appropriate wikidata-entity for this property (to be created).
PRODUCT_ID "ESP_053850_2170_RED" N/A N/A
PRODUCT_VERSION_ID "1.0" N/A N/A
INSTRUMENT_HOST_NAME "MARS RECONNAISSANCE ORBITER" location of creation (P1071) & location of the point of view (P7108) Mars Reconnaissance Orbiter (Q183160)
INSTRUMENT_HOST_ID "MRO" N/A N/A
INSTRUMENT_NAME "HIGH RESOLUTION IMAGING SCIENCE EXPERIMENT" captured with (P4082) HiRISE (Q1036092)
INSTRUMENT_ID "HIRISE" N/A N/A
TARGET_NAME "MARS" depicts (P180) Mars (Q111)
MISSION_PHASE_NAME "EXTENDED SCIENCE PHASE" significant event (P793) Appropriate wikidata-entity for this property (to be created)
ORBIT_NUMBER 53850 orbits completed (P1418) value: 53850 possibly qualified with type of orbit (P522) --> areocentric orbit (Q3884965)
SOURCE_PRODUCT_ID (ESP_053850_2170_RED0_0, ESP_053850_2170_RED0_1, ESP_053850_2170_RED1_0, ESP_053850_2170_RED1_1, ESP_053850_2170_RED2_0, ESP_053850_2170_RED2_1, ESP_053850_2170_RED3_0, ESP_053850_2170_RED3_1, ESP_053850_2170_RED4_0, ESP_053850_2170_RED4_1, ESP_053850_2170_RED5_0, ESP_053850_2170_RED5_1, ESP_053850_2170_RED6_0, ESP_053850_2170_RED6_1, ESP_053850_2170_RED7_0, ESP_053850_2170_RED7_1, ESP_053850_2170_RED8_0, ESP_053850_2170_RED8_1) N/A N/A
RATIONALE_DESC "Monitoring new impact site" to be added to {{En}} in main template - We might also go over all LBL files to search for obvious depicts (P180) statements "PDS description: Monitoring new impact site"
SOFTWARE_NAME "PDS_to_JP2 v3.19 (1.53 2012/01/24 03:07:27)" I was unable to find an appropriate wikidata property here, but I feel like there should be one ?
OBJECT = IMAGE_MAP_PROJECTION
DATA_SET_MAP_PROJECTION "DSMAP.CAT" N/A N/A
MAP_PROJECTION_TYPE "EQUIRECTANGULAR" I was unable to find the appropriate wikidata property for "projection", I might be looking in the wrong place. spatial reference system (P3037) was the closest I got equidistant cylindrical projection (Q1326965)
PROJECTION_LATITUDE_TYPE PLANETOCENTRIC N/A (coordinates given by globe planetocentric Martian coordinates (Q106948918) are planetocentric) N/A
A_AXIS_RADIUS 3389.5743490888 <KM> N/A (simply the mean radius of Mars) N/A
B_AXIS_RADIUS 3389.5743490888 <KM> N/A (simply the mean radius of Mars) N/A
C_AXIS_RADIUS 3389.5743490888 <KM> N/A (simply the mean radius of Mars) N/A
COORDINATE_SYSTEM_NAME PLANETOCENTRIC N/A (coordinates given by globe planetocentric Martian coordinates (Q106948918) are planetocentric) N/A
POSITIVE_LONGITUDE_DIRECTION EAST N/A (coordinates given by globe planetocentric Martian coordinates (Q106948918) are east-positive) N/A
KEYWORD_LATITUDE_TYPE PLANETOCENTRIC N/A (coordinates given by globe planetocentric Martian coordinates (Q106948918) are planetocentric) N/A
POSITIVE_LONGITUDE_DIRECTION EAST N/A (coordinates given by globe planetocentric Martian coordinates (Q106948918) are east-positive) N/A
KEYWORD_LATITUDE_TYPE PLANETOCENTRIC N/A (coordinates given by globe planetocentric Martian coordinates (Q106948918) are planetocentric) N/A
CENTER_LATITUDE 35.000 <DEG> See below See below
CENTER_LONGITUDE 180.000 <DEG> coordinates of depicted place (P9149) - not completely sure though, as the example specifically comments that the location is the center of the projection, not necessarily the center of the image. So it may not be so helpful to import this value as the coordinates of depicted place (P9149) - see bounding values below { "latitude": 35, "longitude": 180, "precision": 0.001, "globe": "http://www.wikidata.org/entity/Q106948918" }
LINE_FIRST_PIXEL 1 N/A N/A
LINE_LAST_PIXEL 32134 N/A N/A
SAMPLE_FIRST_PIXEL 1 N/A N/A
SAMPLE_LAST_PIXEL 25483 N/A N/A
MAP_PROJECTION_ROTATION 0.0 <DEG> N/A N/A
MAP_RESOLUTION 236636.93053097 <PIX/DEG> angular resolution (P3439) converted to milliarcseconds/pixel (1/236636.93053097*3600000) value: 15.21317907531, unit: milliarcsecond (Q21500224)
MAP_SCALE 0.25 <METERS/PIXEL> I was unable to find an appropriate wikidata property here, something like "ground sample distance" or similar - I think it should be included as custom field in the wikitext template for each image, as it is a very commonly needed figure N/A
MAXIMUM_LATITUDE 36.973920949851 <DEG> coordinates of northernmost point (P1332) { "latitude": 36.973920949851, "longitude": 148.23651113052, "precision": 0.000000000001, "globe": "http://www.wikidata.org/entity/Q106948918" } <-- longitude set to westermost longitude clarified by syntax clarification (P2916) qualifier.
MINIMUM_LATITUDE 36.838131126084 <DEG> coordinates of southernmost point (P1333) { "latitude": 36.838131126084, "longitude": 148.36797304112, "precision": 0.000000000001, "globe": "http://www.wikidata.org/entity/Q106948918" } <-- longitude set to easternmost longitude clarified by syntax clarification (P2916) qualifier.
LINE_PROJECTION_OFFSET 8749396.5 <PIXEL> N/A N/A
SAMPLE_PROJECTION_OFFSET 6157087.5 <PIXEL> N/A N/A
EASTERNMOST_LONGITUDE 148.36797304112 <DEG> coordinates of easternmost point (P1334) { "latitude": 36.973920949851, "longitude": 148.36797304112, "precision": 0.000000000001, "globe": "http://www.wikidata.org/entity/Q106948918" } <-- latitude set to maximum latitude clarified by syntax clarification (P2916) qualifier.
WESTERNMOST_LONGITUDE 148.23651113052 <DEG> coordinates of westernmost point (P1335) { "latitude": 36.838131126084, "longitude": 148.23651113052, "precision": 0.000000000001, "globe": "http://www.wikidata.org/entity/Q106948918" } <-- latitude set to minimum latitude clarified by syntax clarification (P2916) qualifier.
GROUP = TIME_PARAMETERS
MRO:OBSERVATION_START_TIME 2018-01-21T12:51:50.434 N/A N/A
START_TIME 2018-01-21T12:51:50.582 N/A N/A
SPACECRAFT_CLOCK_START_COUNT "1201006358:10651" N/A N/A
STOP_TIME 2018-01-21T12:51:53.012 inception (P571) and date field in template date used as value for wikidata property, full time string parsed to data field for date field in wikitext template
SPACECRAFT_CLOCK_STOP_COUNT "1201006360:38785" N/A N/A
PRODUCT_CREATION_TIME 2018-01-25T05:01:36 publication date (P577) but I am not completely sure here date used as value for wikidata property.
GROUP = INSTRUMENT_SETTING_PARAMETERS
MRO:CCD_FLAG (ON, ON, ON, ON, ON, ON, ON, ON, ON, OFF, ON, ON, ON, ON) N/A N/A
MRO:BINNING (1, 1, 1, 1, 1, 1, 1, 1, 1, -9998, -9998, -9998, -9998, -9998) N/A N/A
MRO:TDI (128, 128, 128, 128, 128, 128, 128, 128, 128, -9998, -9998, -9998, -9998, -9998) N/A N/A
MRO:SPECIAL_PROCESSING_FLAG (NOMINAL, NOMINAL, NOMINAL, NOMINAL, NOMINAL, NOMINAL, NOMINAL, NOMINAL, NOMINAL, "NULL", "NULL", "NULL", "NULL", "NULL") N/A N/A
GROUP = VIEWING_PARAMETERS
INCIDENCE_ANGLE 42.714413 <DEG> N/A N/A
EMISSION_ANGLE 0.434473 <DEG> tilt (P8208) value: 0.434473 --> unit degree (Q28390)
PHASE_ANGLE 42.502572 <DEG> N/A N/A
LOCAL_TIME 15.10520 <LOCALDAY/24> N/A N/A
SOLAR_LONGITUDE 118.215906 <DEG> N/A N/A
SUB_SOLAR_AZIMUTH 173.163664 <DEG> N/A N/A
NORTH_AZIMUTH 270.000000 <DEG> N/A N/A
OBJECT = COMPRESSED_FILE
FILE_NAME "ESP_053850_2170_RED.JP2" N/A N/A
RECORD_TYPE UNDEFINED N/A N/A
ENCODING_TYPE "JP2" N/A N/A
ENCODING_TYPE_VERSION_NAME "ISO/IEC15444-1:2004" N/A N/A
INTERCHANGE_FORMAT BINARY N/A N/A
UNCOMPRESSED_FILE_NAME "ESP_053850_2170_RED.IMG" N/A N/A
REQUIRED_STORAGE_BYTES 1637741444 <BYTES> N/A N/A
DESCRIPTION "JP2INFO.TXT" N/A N/A
INTERCHANGE_FORMAT BINARY N/A N/A
OBJECT = UNCOMPRESSED_FILE
FILE_NAME "ESP_053850_2170_RED.IMG" N/A N/A
RECORD_TYPE FIXED_LENGTH N/A N/A
RECORD_BYTES 50966 <BYTES> N/A N/A
FILE_RECORDS 32134 N/A N/A
IMAGE "ESP_053850_2170_RED.IMG" N/A N/A
DESCRIPTION "HiRISE projected and mosaicked product" Could potentially be added to {{En}} in description N/A
LINES 32134 N/A N/A
LINE_SAMPLES 25483 N/A N/A
BANDS 1 N/A N/A
SAMPLE_TYPE MSB_UNSIGNED_INTEGER N/A N/A
SAMPLE_BITS 16 N/A N/A
SAMPLE_BIT_MASK 2#0000001111111111# N/A N/A
SCALING_FACTOR 1.41615214363203e-04 N/A N/A
BANDS 1 N/A N/A
OFFSET 0.060336154982679 N/A N/A
BAND_STORAGE_TYPE BAND_SEQUENTIAL N/A N/A
CORE_NULL 0 N/A N/A
CORE_LOW_REPR_SATURATION 1 N/A N/A
CORE_LOW_INSTR_SATURATION 2 N/A N/A
CORE_HIGH_REPR_SATURATION 1023 N/A N/A
CORE_HIGH_INSTR_SATURATION 1022 N/A N/A
CENTER_FILTER_WAVELENGTH 700 <NM> Should be added to wikitext template along with FILTER_NAME as the images will be uploaded as PNG 16 bit grayscale N/A
MRO:MINIMUM_STRETCH 3 N/A N/A
MRO:MAXIMUM_STRETCH 1021 N/A N/A
FILTER_NAME "RED" N/A N/A

Additionally, I propose the following properties:


@Meisam: --Askeuhd (talk) 16:08, 7 June 2022 (UTC)Reply[reply]

freepd.com[edit]

The site contains production music tracks in various genres, in MP3 format.

  • Source to upload from:

http://freepd.com/

    • Do the media URLs follow a pattern?

None found. Tracks seem to be in sub-directories related to nominal genre; MP3 files appear to be named for the track title.

    • Does the site have an API?

Unknown.

    • What else could ease uploading? (is the site valid XHTML, do they use a WCM…?)

Unknown.

    • Did you contact the site owner?

Site owner not contacted.

  • Describe the works to be uploaded in detail (audio files, images by …):

"Production music", in various genres., in MP3 format.

  • Which license tag(s) should be applied?

The site claims the tracks are in the public domain (http://freepd.com/faq.html). However, some of these tracks were previously under CC BY on the site owner's other site, Incompetech.

  • Is there a template that could be used on the file description pages? Do you think a special template should be created?

{{Information}} with an additional field, as was done on the previous batch upload for Incompetech.

ShakespeareFan00 (talk) 10:20, 18 December 2017 (UTC)Reply[reply]

Opinions[edit]

Assigned to Progress Bot name Category

Commons:Batch uploading/timbeek.com/[edit]

  • Source to upload from:

http://timbeek.com/ in particular music tracks listed in http://timbeek.com/royalty-free-music/isrc/

    • Do the media URLs follow a pattern?

No general pattern, but there's a master list (not sure if it's complete) of track pages here: http://timbeek.com/royalty-free-music/isrc/. Download links in the UI seem to point to numbered subdirectories, but no general pattern is obvious.

    • Does the site have an API?

Unknown.

    • What else could ease uploading? (is the site valid XHTML, do they use a WCM…?)

Unknown

    • Did you contact the site owner?

Site owner not contacted.

  • Describe the works to be uploaded in detail (audio files, images by …):

A small set of 'production music' tracks, in assorted genres.


  • Which license tag(s) should be applied?

See: http://timbeek.com/royalty-free-music/license/ ; assuming the attribution requirements are met, the music appears to be under CC BY 4.0. (See also: http://timbeek.com/royalty-free-music/faq/ and http://timbeek.com/royalty-free-music/copyright/)

  • Is there a template that could be used on the file description pages? Do you think a special template should be created?

{{Information}} with additional fields, as was previously implemented for the incompetech.com batch upload (this site seems to use a similar approach).

ShakespeareFan00 (talk) 19:05, 15 December 2017 (UTC)Reply[reply]

Opinions[edit]

Assigned to Progress Bot name Category

Images of listed buildings by Stephen Richards on Geograph.org.uk[edit]

  • Source to upload from: http://www.geograph.org.uk
    • Do the media URLs follow a pattern? Yes: http://www.geograph.org.uk/photo/[ID]
    • Does the site have an API? Yes: http://www.geograph.org.uk/help/api
    • What else could ease uploading? (is the site valid XHTML, do they use a WCM…?) Don't know
    • Did you contact the site owner? No need
  • Describe the works to be uploaded in detail (audio files, images by …):
All photographs of listed buildings by this user are of high quality and are tagged [listed building]. They would be very useful to have on Commons, as every listed building has an item on Wikidata. I'd like them to be uploaded en masse and given the categories Category:Listed buildings in [county or London borough] and Category:Images by Stephen Richards. I could then further refine the listed building categories manually. However, the terms "Grade I", "Grade II*" and "Grade II" (the three listing grades for buildings in England and Wales) appear in the image descriptions, so is there a way that these could be picked out and used to categorise the images on Commons? (See the sketch below.)
  • Which license tag(s) should be applied?
{{Geograph}}
  • Is there a template that could be used on the file description pages? Do you think a special template should be created?
{{Geograph}}
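Regarding the question above about picking out the listing grades, a small sketch (the category names produced are illustrative only; the actual category scheme would need to be agreed):

import re

def listing_grade(description):
    """Pick out the listing grade mentioned in a Geograph description.
    'II*' is tested before 'II' so the star is not dropped."""
    m = re.search(r"Grade\s+(II\*|II|I)(?![\w*])", description)
    return m.group(1) if m else None

def grade_categories(description, place):
    grade = listing_grade(description)
    cats = [f"Category:Listed buildings in {place}", "Category:Images by Stephen Richards"]
    if grade:
        cats.append(f"Category:Grade {grade} listed buildings in {place}")
    return cats

print(grade_categories("The Old Vicarage, a Grade II* listed building", "Norfolk"))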

Ham II (talk) 19:50, 16 November 2017 (UTC)Reply[reply]

Opinions[edit]

@Ham II: first time I notice this. The GeographBot is uploading again (for quite some time already). It started at one many years ago and it's now at 3645078 which was contributed 9 September, 2013. It's slowly catching up and at some point all the files you were looking for will also be uploaded. Multichill (talk) 09:31, 17 July 2022 (UTC)Reply[reply]
Assigned to Progress Bot name Category

USDA NRCS Plants Database[edit]

  • Source to upload from: http://plants.usda.gov/
    • Do the media URLs follow a pattern? Yes.
    • Does the site have an API? No.
    • What else could ease uploading? (is the site valid XHTML, do they use a WCM…?) valid XHTML
    • Did you contact the site owner? No.
  • Describe the works to be uploaded in detail (audio files, images by …): Public domain: 10771 photos and 7064 line drawings, with species information for categorization. There are other copyrighted images as well, some of which may be freely licensed.
  • Which license tag(s) should be applied?

{{PD-USGov-USDA-NRCS}}

  • Is there a template that could be used on the file description pages? Do you think a special template should be created?

Opinions[edit]

@Guanaco: There is a lot of copyrighted material within these images, e.g. [4] [5]. (Just because this is a U.S. government website does not mean all the material is U.S. government material and thereby freely usable!) Actually I have not found too many images that really can be used (e.g. [6]). You should at least provide a procedure for distinguishing between copyrighted and free material. --Reinhard Kraasch (talk) 11:02, 9 July 2017 (UTC)

@Reinhard Kraasch: The gallery search function [7] has a filter by copyright status. [8]
I've found that the URLs linked by the thumbnails provide species information within <title>: https://plants.usda.gov/core/profile?symbol=HACA2&photoID=haca2_003_ahp.jpg#
The search is navigable with &page=2, 3, 4, etc.
I'm actually interested in scripting this myself now, though it would be my first batch upload task. Guanaco (talk) 14:23, 9 July 2017 (UTC)Reply[reply]
@Guanaco: Well, just go on... On the other hand it always is a good idea to have a second opinion with such a batch upload - especially for the non-technical aspects. --Reinhard Kraasch (talk) 20:52, 10 July 2017 (UTC)Reply[reply]
Assigned to Progress Bot name Category

US National Archives[edit]

I am hoping to begin a bulk upload of media from the US National Archives in the next few weeks. This will be a very different approach from the first upload, which was based on uploading files from an offline drive and scraping HTML for the metadata. This time around, NARA has an API for our online catalog, and so I am building a bot, using mwclient, to upload using the live metadata and files from the API. Some details:

Dataset

The dataset includes all PD materials at https://catalog.archives.gov (API: https://catalog.archives.gov/api/v1). I plan to begin with a series of ~100,000 WWI-era photos. Technically, there are over 15 million files (and counting) in this dataset.

File names

The script is currently configured to name files with the following formula. For single-page items:

  • "File:[TITLE] - NARA - [NAID].ext"
    Where "[TITLE]" is the catalog record's title field, and "[NAID]" is the National Archives Identifier. If this is over the character limit, "[TITLE]" is automatically truncated, with "(...)" appended.

For multi-page items (since the above formula would give all files belonging to one catalog record the same title):

  • "File:[TITLE] - NARA - [NAID] (page X).ext"
Metadata

We are developing a custom metadata mapping, since NARA does not adhere to a metadata standard. You can see the metadata template we use here: {{NARA-image-full}}. Some notes:

While all the records in this catalog come from NARA or partner institutions, there are many different facility locations, and some NARA facilities have their own institution templates already (e.g. US presidential libraries). Therefore, I am creating institution templates to go along with all NARA locations, and the script will insert the correct institution template based on a mapping.

NARA's authority file is not yet mapped to Wikidata; however, that is definitely something that would be useful in the future. For now, we will upload files with NARA's creator and author names and their NAIDs and links back to the catalog authority record. However, including the NAIDs in a Commons template field means that in the future, Wikidata could be used to make creator templates appear instead. Any help with this would be appreciated.

Licenses

Because NARA records are nearly all (>99%) derived from the records of US federal agencies, these uploads will use {{PD-USGov}} or its subtemplates. Most NARA records are in one of about 600 record groups based on their creating agency, so I am using a mapping of NARA record groups to Commons PD-USGov templates so that the bot can apply the more specific agency templates in most cases. Help filling out this mapping would be appreciated.

Nearly all holdings of the US National Archives are in the public domain as a work of the federal government (or, otherwise, due to age). This is marked in the "use restriction" field in the catalog, with a value of "Unrestricted" indicating public domain determination by the archivists. Therefore, the script will be configured to skip over any records in which the use restriction is anything other than "unrestricted" (even "possibly" ones, which could ultimately be PD, but need a human determination).
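A condensed sketch of the two license rules described above, i.e. mapping record groups to PD-USGov subtemplates and skipping anything not explicitly "Unrestricted" (the record dict layout and the record-group numbers shown are placeholders, not the live catalog API schema):

# Hypothetical record-group -> Commons template mapping (illustrative entries only).
RECORD_GROUP_TEMPLATES = {
    80: "{{PD-USGov-Military-Navy}}",
    111: "{{PD-USGov-Military-Army}}",
}
DEFAULT_TEMPLATE = "{{PD-USGov}}"

def license_template(record):
    """Return the license tag for a catalog record, or None if it must be skipped."""
    if record.get("use_restriction", "").strip().lower() != "unrestricted":
        return None  # anything other than "Unrestricted" needs a human decision
    return RECORD_GROUP_TEMPLATES.get(record.get("record_group"), DEFAULT_TEMPLATE)

print(license_template({"use_restriction": "Unrestricted", "record_group": 111}))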

Categories

All uploads will be automatically categorized by the metadata template into Category:Media contributed by the National Archives and Records Administration and a category for the series they belong to (such as Category:US National Archives series: DOCUMERICA: The Environmental Protection Agency's Program to Photographically Document Subjects of Environmental Concern, compiled 1972 - 1977). Eventually, the script will be designed to create the series category if a file is uploaded for a series which does not yet have one.

When it comes to topical categories, past NARA uploads utilized the {{Uncategorized}} tag to encourage the community to add topical tags. However, since this creates work for the community, I am planning this time around to run uploads a small batch (hundreds to a few thousand) at a time, so I can upload them with one or more topical categories that apply to all records in the batch, rather than uncategorized.

Code

You can find the upload bot's code at https://github.com/usnationalarchives/wikimedia-upload. This project is being developed in public on NARA's official GitHub account. I would welcome collaboration (pull requests or otherwise) there. In addition, the Commons community is welcome to file issue reports on that repo.

Examples

The most recent test uploads can be viewed in Category:US National Archives series: American Unofficial Collection of World War I Photographs. I am still polishing the upload script, but these examples essentially represent what should be expected from the bot once it gets started.

Opinions[edit]

The bot account is technically already flagged from the last bulk upload a couple of years ago, however I would like to submit the current plan to community review before restarting uploads. If there are any opinions on the bot's design or the format of uploads or other issues, I am happy to hear them. We'd also like to know whether to limit what is uploaded in any way—as in, would Commons actually be interested in 15 million files, or might some of these, like the millions of census cards, not be of interest. Also, if anyone is interested in helping out with the coding or other tasks, please feel free to let me know. This is a big undertaking. Thanks! Dominic (talk) 17:25, 31 May 2017 (UTC)Reply[reply]


Assigned to Progress Bot name Category
User:Dominic Coding User:US National Archives bot Category:Media contributed by the National Archives and Records Administration

ESA-Rosetta-NAVCAM[edit]

  • Describe the works to be uploaded in detail (audio files, images by …):
Images of the comet 67P/Churyumov–Gerasimenko taken by the NAVCAM on the Rosetta spacecraft.


  • Is there a template that could be used on the file description pages? Do you think a special template should be created?

Yann (talk) 14:32, 6 June 2015 (UTC)Reply[reply]

Opinions[edit]

Assigned to Progress Bot name Category

USC Cinema[edit]

Source to upload from

https://archive.org/details/usc-sound-effect-archive

License

The files in this collection claim to be licensed CC BY 4.0. This is not accurate; the archive is just someone collating the files and uploading them there. These files were all uploaded by Craig Smith on freesound.com under CC0. The Gold and Red files are a valid {{CC0}} as Craig Smith works for USC. The Sunset Editorial files are all either {{PD-US-defective notice}} or {{PD-US-defective notice-1978-89}}. The notices are defective because, according to the linked blog, all SSE ever got was a credit line. The company was no longer active by 1989, and I checked and there are no copyright registrations under SSE's name. The publication years of the sound effects, however, are unknown, so I plan on tagging everything with PD-US-defective notice-1978-89.

Description

This is a set of audio files by the University of Southern California and Sunset Editorial consisting of the original recordings of sound effects used in movies from the 60s to 80s; a few of these sound effects are very famous (like the Wilhelm Scream). The archive conveniently maps all the sound effects in a metadata .csv file, with descriptions, upload dates, and so on, so setting up a batch upload isn't too difficult. I'm prepared to do this upload myself.
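A minimal sketch of how the metadata CSV could drive the license tagging described above (the column names are assumptions to be replaced with the real headers):

import csv

LICENSE_BY_COLLECTION = {
    "gold": "{{CC0}}",
    "red": "{{CC0}}",
    "sunset editorial": "{{PD-US-defective notice-1978-89}}",
}

with open("usc-sound-effect-archive-metadata.csv", newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):
        collection = row["collection"].strip().lower()
        tag = LICENSE_BY_COLLECTION.get(collection)
        if tag is None:
            continue  # unexpected collection: flag for manual review
        print(row["filename"], tag, row["description"])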

Snowmanonahoe (talk) 02:21, 27 May 2023 (UTC)Reply[reply]

Opinions[edit]

Assigned to Progress Bot name Category

Old requests (before 2020-01-01)[edit]

Batch uploads in progress

Batch uploads on hold[edit]

Done (to be moved to past batch uploads)[edit]

Failed[edit]

Scripters[edit]

Currently inactive[edit]

Tools[edit]

Scripts, Examples and Information[edit]