From 5c42677eac754872bd9e9d5791e2f82d89963400 Mon Sep 17 00:00:00 2001
From: Wesley Moore
Date: Mon, 6 May 2024 20:31:03 +1000
Subject: [PATCH] Add note about

---
 v2/README.md                                       |  3 ++-
 .../posts/2024/youtube-subscriptions-opml/index.md | 13 ++++++++++---
 2 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/v2/README.md b/v2/README.md
index 5c2e10a..277cbc0 100644
--- a/v2/README.md
+++ b/v2/README.md
@@ -8,7 +8,8 @@ Build with the [Zola] static site compiler:
 
 Generate dates in front-matter from vim:
 
-    :r! date +\%Y-\%m-\%dT\%H:\%M:\%S\%:z
+    :r! date +\%Y-\%m-\%dT\%H:\%M:\%S\%:z
+    :r! date -Iseconds
 
 ## Terminal screenshots
 
diff --git a/v2/content/posts/2024/youtube-subscriptions-opml/index.md b/v2/content/posts/2024/youtube-subscriptions-opml/index.md
index e70fef8..53c332d 100644
--- a/v2/content/posts/2024/youtube-subscriptions-opml/index.md
+++ b/v2/content/posts/2024/youtube-subscriptions-opml/index.md
@@ -2,8 +2,8 @@
 title = "Exporting YouTube Subscriptions to OPML and Watching via RSS"
 date = 2024-05-06T10:38:22+10:00
 
-#[extra]
-#updated = 2024-02-21T10:05:19+10:00
+[extra]
+updated = 2024-05-06T20:24:23+10:00
 +++
 
 This post describes how I exported my 500+ YouTube subscriptions to an OPML
@@ -77,7 +77,7 @@ which means I needed to determine the channel id for each page. To do that
 without futzing around with Google API keys and APIs I needed to download the
 HTML of each channel page.
 
-To do that I generated a config file for `curl` from the JSON file:
+First I generated a config file for `curl` from the JSON file:
 
     jaq --raw-output '.[] | (split("/") | last) as $name | "url \(.)\noutput \($name).html"' subscriptions.json > subscriptions.curl
 
@@ -145,6 +145,12 @@ Let's break that down:
 - `TITLE`, `XML_URL`, and `URL` are escaped.
 - Finally we generate a JSON object with the title, URL, and RSS URL and write
   it into a `json` directory under the name of the channel.
 
+**Update:** [Stephen pointed out on Mastodon][sedmonds] that the HTML contains the usual
+`