Compare commits


No commits in common. "5c42677eac754872bd9e9d5791e2f82d89963400" and "ac6bac09032a0e5049cd9c1b21b752a135737cf4" have entirely different histories.

3 changed files with 5 additions and 14 deletions

File 1 of 3:

@@ -9,7 +9,6 @@ Build with the [Zola] static site compiler:
 Generate dates in front-matter from vim:
 :r! date +\%Y-\%m-\%dT\%H:\%M:\%S\%:z
-:r! date -Iseconds
 ## Terminal screenshots
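
For reference, both commands emit the same ISO 8601 timestamp, `-Iseconds` being the shorter GNU coreutils spelling; a quick check outside vim (output illustrative):

    $ date +%Y-%m-%dT%H:%M:%S%:z
    2024-05-06T10:38:22+10:00
    $ date -Iseconds
    2024-05-06T10:38:22+10:00

Inside vim the `%` characters have to be escaped as `\%` (a bare `%` expands to the current file name in `:r!` commands), hence the backslashes above.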

File 2 of 3:

@@ -2,8 +2,8 @@
 title = "Exporting YouTube Subscriptions to OPML and Watching via RSS"
 date = 2024-05-06T10:38:22+10:00
-[extra]
-updated = 2024-05-06T20:24:23+10:00
+#[extra]
+#updated = 2024-02-21T10:05:19+10:00
 +++
 This post describes how I exported my 500+ YouTube subscriptions to an OPML
@@ -77,7 +77,7 @@ which means I needed to determine the channel id for each page. To do that
 without futzing around with Google API keys and APIs I needed to download the
 HTML of each channel page.
-First I generated a config file for `curl` from the JSON file:
+To do that I generated a config file for `curl` from the JSON file:
 jaq --raw-output '.[] | (split("/") | last) as $name | "url \(.)\noutput \($name).html"' subscriptions.json > subscriptions.curl
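
The jaq program splits each channel URL on `/`, takes the last segment as the output file name, and emits a `url`/`output` pair per channel — the config-file syntax that `curl --config` (`-K`) accepts. The resulting `subscriptions.curl` looks something like this (channel names illustrative):

    url https://www.youtube.com/@ExampleChannel
    output @ExampleChannel.html
    url https://www.youtube.com/@AnotherChannel
    output @AnotherChannel.html

Running `curl -K subscriptions.curl` then downloads every channel page in a single invocation.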
@@ -145,12 +145,6 @@ Let's break that down:
 - `TITLE`, `XML_URL`, and `URL` are escaped.
 - Finally we generate a JSON object with the title, URL, and RSS URL and write it into a `json` directory under the name of the channel.
-**Update:** [Stephen pointed out on Mastodon][sedmonds] that the HTML contains the usual
-`<link rel="alternate"` tag for RSS auto-discovery. I did check for that initially but
-I think the Firefox dev tools where having a bad time with the large size of the YouTube
-pages and didn't show me any matches at the time. Anyway, that could have been used to
-find the feed URL directly instead of building it from the `og:url`.
 Ok, almost there. That script had to be run for each of the channel URLs.
 First I generated a file with just a plain text list of the channel URLs:
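
The command that produced that list isn't visible in this hunk; given that `subscriptions.json` is a flat array of channel URLs, a plausible sketch would be:

    jaq --raw-output '.[]' subscriptions.json > channels.txt

where `channels.txt` is a hypothetical file name, holding one channel URL per line.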
@@ -224,7 +218,7 @@ It does the following:
 - Indent the OPML document.
 - Write it to stdout using a Unicode encoding with an XML declaration (`<?xml version='1.0' encoding='utf-8'?>`).
-Whew that was a lot! With the OPML file generated I was finally able to import
+Whew that was a lot! With the OMPL file generated I was finally able to import
 all my subscriptions into Feedbin.
 All the code is available in [this
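
For context, an OPML subscription list has a standard shape — feed readers like Feedbin subscribe to the `xmlUrl` of each `outline` element. A minimal sketch of the generated document (titles, channel id, and URLs illustrative):

    <?xml version='1.0' encoding='utf-8'?>
    <opml version="2.0">
      <body>
        <outline type="rss" text="Example Channel" title="Example Channel"
                 xmlUrl="https://www.youtube.com/feeds/videos.xml?channel_id=UCxxxxxxxx"
                 htmlUrl="https://www.youtube.com/@ExampleChannel"/>
      </body>
    </opml>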
@@ -299,4 +293,3 @@ option.
 [feedbin-sharing]: https://feedbin.com/help/sharing-read-it-later-services/
 [OpenGraph]: https://ogp.me/
 [feedbin-search]: https://feedbin.com/help/search-syntax/
-[sedmonds]: https://aus.social/@popcorncx/112392881683597817

File 3 of 3:

@@ -36,7 +36,6 @@ pre, code {
   padding: 0.1em 0.2em;
   font-size: 16px;
   border-radius: 3px;
-  overflow-wrap: anywhere;
 }
 pre {
   padding: 0.5em 1em;