forked from wezm/wezm.net
https://nitter.net/search?f=tweets&q=%23100binaries+from%3A%40wezm

For each page of results on Nitter, run in the browser DevTools console:

tweets = []
document.querySelectorAll('.tweet-date a').forEach(a => tweets.push(a.href))
copy(tweets.join("\n"))

// Load next page of results and repeat
Paste into `tweets.txt`
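The `copy()` call above puts the joined URL list on the clipboard, so it can also be appended from the terminal instead of pasting in an editor (a sketch; `pbpaste` is macOS, use `xclip -o -selection clipboard` or `wl-paste` on Linux):

pbpaste >> tweets.txt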
With all URLs collected, `:%s/nitter\.net/twitter.com/`
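Or do the rewrite from the shell rather than inside Vim (a sketch; GNU sed shown, BSD/macOS sed wants `-i ''`):

sed -i 's/nitter\.net/twitter.com/' tweets.txt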
Fetch the oEmbed JSON for each tweet:
xargs -I '{url}' -a tweets.txt -n 1 curl https://api.twitter.com/1/statuses/oembed.json\?omit_script\=true\&dnt\=true\&lang\=en\&url\=\{url\} > tweets.json
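The same fetch written as a plain loop may be easier to read; `curl -G` with `--data-urlencode` takes care of the query-string escaping (a sketch, intended as an equivalent of the xargs one-liner above):

while read -r url; do
  curl -sG 'https://api.twitter.com/1/statuses/oembed.json' \
    --data-urlencode "url=$url" \
    -d omit_script=true -d dnt=true -d lang=en
done < tweets.txt > tweets.json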
Add commas between results: `:%s/}{/},\r{/g`
Manually add `[` and `]` at the top and bottom of the file
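Both of those manual edits can be skipped by letting `jq` slurp the concatenated objects into a single array (a sketch, run against the raw curl output; `tweets-array.json` is just an illustrative name):

jq -s '.' tweets.json > tweets-array.json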
Reverse and format with `jq`: `:%!jq '.|reverse' -`. It would have been better to reverse `tweets.txt` in the first place, but doing it here avoids making another 100 HTTP requests.
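For reference, reversing the URL list up front is a one-liner (a sketch; `tac` is GNU coreutils, `tail -r` on BSD/macOS; the output name is illustrative):

tac tweets.txt > tweets-reversed.txt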
Add a couple of missing tweets manually. Remove the halfway-through tweet.