# reddit-lemmy-importer
Turn JSON files downloaded from https://the-eye.eu/redarcs/ into Lemmy comms :D

This is effectively https://github.com/mesmere/RedditLemmyImporter, but in JS and for a different type of archive.

The posts/comments dump is read as a stream, so handling bigger subreddits is less RAM-intensive (though the final comment tree will still take up a good amount of RAM, so consider creating a big swapfile when processing large subreddits).

**You must create the community and user in Lemmy before you run the SQL script, since the script grabs the corresponding IDs based on the names you give for the two.**

You can still build the SQL script before making the comm/user, though.
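Concretely, the generated statements can resolve those IDs by name at run time, along these lines (a hedged sketch: the `community`/`person` table and `name` column come from Lemmy's schema, but verify them against the hexbear branch's schema.rs; `insertPostSql` is illustrative, not the importer's actual code):

```javascript
// Build an INSERT that looks up community_id and creator_id by name,
// which is why the comm and user must already exist in the database.
function insertPostSql(commName, userName, title) {
  const esc = (s) => s.replace(/'/g, "''"); // naive SQL string escaping
  return (
    `INSERT INTO post (name, community_id, creator_id)\n` +
    `SELECT '${esc(title)}', c.id, p.id\n` +
    `FROM community c, person p\n` +
    `WHERE c.name = '${esc(commName)}' AND p.name = '${esc(userName)}';`
  );
}
```

If the names don't exist yet, the `SELECT` matches nothing and the `INSERT` silently inserts zero rows, which is why running the script first fails.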
## TODO:
- set URL embed titles/descriptions, `url_content_type`, and `embed_video_url` in posts
- FIX `ap_id`!!!!!
  - this could be done by taking the federated URL as an argument, then updating the `ap_id` using [the URL + /type/ + SQL id from the post]
- handle removal by self/mod in posts and comments properly
- maybe modify the downvotes in `comment_aggregates` when the score is negative (this depends on whether that column exists in production hexbear (I'm on some strange branch, idk))
  - since right now it just changes the upvotes to be negative (or whatever the score is)
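That last item could be handled with a small mapping instead of writing a negative upvote count (a sketch, assuming the hexbear branch keeps separate `upvotes`/`downvotes` columns in `comment_aggregates`; `scoreToAggregates` is a hypothetical helper, not existing importer code):

```javascript
// Split a reddit score into Lemmy-style aggregate columns: a negative
// score becomes downvotes rather than a negative upvote count.
function scoreToAggregates(score) {
  return score >= 0
    ? { upvotes: score, downvotes: 0, score }
    : { upvotes: 0, downvotes: -score, score };
}
```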
## references
- https://github.com/mesmere/RedditLemmyImporter (basically stole the SQL stuff from there)
- https://www.w3schools.com/sql/
- https://linux.die.net/man/1/psql
- me kinda just messing with test posts to see what they look like in the db when you make a real post
- https://github.com/hexbear-collective/lemmy/tree/hexbear-0.19.5
- https://github.com/hexbear-collective/lemmy/blob/hexbear-0.19.5/crates/db_schema/src/schema.rs
- https://www.geeksforgeeks.org/returning-in-postgresql/
- https://tanishiking.github.io/posts/count-unicode-codepoint/
- https://exploringjs.com/js/book/ch_unicode.html
- https://www.reddit.com/dev/api/