Compare commits


28 Commits

SHA1 Message Date
4b1195f221 Update to v0.4.2 2021-03-12 13:48:43 -08:00
a472461ee8 Switch some links in Readme to spike.codes instance 2021-03-12 12:59:50 -08:00
baf5e3d7ee Fix Replit links 2021-03-12 12:55:02 -08:00
f209757ed6 Handle proxy unwraps 2021-03-12 12:21:02 -08:00
4173362ce1 Fix #148 2021-03-11 20:15:26 -08:00
b2ae5e486f Rename subreddit::page to subreddit::community 2021-03-10 21:43:06 -08:00
cda19a1912 Remove duplicate "description" meta tag for posts 2021-03-10 21:41:39 -08:00
f0b69f8a4a Update to v0.4 2021-03-10 20:51:08 -08:00
118ff9485c Document proxy.rs 2021-03-10 19:02:03 -08:00
4a51b7cfb0 Horizontally squish comments 2021-03-10 15:10:59 -08:00
f877face80 Update README.md 2021-03-10 20:56:33 +00:00
f0e8deb000 Add alt attribute to user icon 2021-03-10 11:29:36 -08:00
e70dfe2c0b Fix <video> size attributes 2021-03-10 10:49:18 -08:00
2e89a85858 Handle alternative status codes 2021-03-09 22:23:26 -08:00
e59b2b1346 Custom HTTP client with Rustls 2021-03-09 22:13:46 -08:00
1c36549134 Fix #146 2021-03-09 07:22:17 -08:00
5fb88d4744 Allow certain clippy lints 2021-03-08 19:22:10 -08:00
6c7188a1b9 Prevent pushing of Cargo.lock 2021-03-08 18:50:03 -08:00
84009fbb8e Remove Cargo.lock 2021-03-08 18:49:35 -08:00
bf783c2f3a Optimize type casting 2021-03-08 18:49:06 -08:00
213babb057 Update dependencies 2021-03-08 16:30:34 -08:00
7dbc02d930 Update himiko instances' location 2021-03-05 18:39:31 +00:00
10873dd0c6 Fix #144 2021-03-05 06:24:40 -08:00
c0d1519341 Update screenshot in README.md (#143)
* Update screenshot in README.md

New screenshot for v0.3.1. Also 35% lighter!

* Update screenshot

Co-authored-by: Spike <19519553+spikecodes@users.noreply.github.com>
2021-03-05 04:30:32 +00:00
8709c49f39 NGINX reverse proxy notice 2021-03-04 03:53:49 +00:00
56cfeba9e5 Update to v0.3.1 2021-03-03 09:35:40 -08:00
890d5ae625 Revert to curl-client 2021-03-03 09:24:31 -08:00
caa8f1d49e Test h1-client 2021-03-03 09:15:19 -08:00
16 changed files with 308 additions and 2248 deletions

.gitignore vendored (1 line changed)

@@ -1 +1,2 @@
/target
Cargo.lock

Cargo.lock generated (2101 lines changed): file diff suppressed because it is too large

Cargo.toml

@@ -3,19 +3,19 @@ name = "libreddit"
description = " Alternative private front-end to Reddit"
license = "AGPL-3.0"
repository = "https://github.com/spikecodes/libreddit"
version = "0.3.0"
version = "0.4.2"
authors = ["spikecodes <19519553+spikecodes@users.noreply.github.com>"]
edition = "2018"
[dependencies]
tide = { version = "0.16.0", default-features = false, features = ["h1-server", "cookies"] }
async-std = { version = "1.9.0", features = ["attributes"] }
surf = { version = "2.2.0", default-features = false, features = ["h1-client-rustls", "encoding"] }
cached = "0.23.0"
askama = { version = "0.10.5", default-features = false }
serde = { version = "1.0.123", features = ["derive"] }
serde_json = "1.0.64"
async-recursion = "0.3.2"
regex = "1.4.3"
async-std = { version = "1.9.0", features = ["attributes"] }
async-tls = { version = "0.11.0", default-features = false, features = ["client"] }
cached = "0.23.0"
clap = { version = "2.33.3", default-features = false }
regex = "1.4.4"
serde = { version = "1.0.124", features = ["derive"] }
serde_json = "1.0.64"
tide = { version = "0.16.0", default-features = false, features = ["h1-server", "cookies"] }
time = "0.2.25"

README.md

@@ -2,11 +2,11 @@
> An alternative private front-end to Reddit
![screenshot](https://i.ibb.co/F0JsY5K/image.png)
![screenshot](https://i.ibb.co/74gZ4pd/libreddit-rust.png)
---
**10 second pitch:** Libreddit is a portmanteau of "libre" (meaning freedom) and "Reddit". It is a private front-end like [Invidious](https://github.com/iv-org/invidious) but for Reddit. Browse the coldest takes of [r/unpopularopinion](https://libredd.it/r/unpopularopinion) without being [tracked](#reddit).
**10 second pitch:** Libreddit is a portmanteau of "libre" (meaning freedom) and "Reddit". It is a private front-end like [Invidious](https://github.com/iv-org/invidious) but for Reddit. Browse the coldest takes of [r/unpopularopinion](https://libreddit.spike.codes/r/unpopularopinion) without being [tracked](#reddit).
- 🚀 Fast: written in Rust for blazing fast speeds and memory safety
- ☁️ Light: no JavaScript, no ads, no tracking, no bloat
@@ -30,7 +30,7 @@
- [Docker](#2-docker)
- [AUR](#3-aur)
- [GitHub Releases](#4-github-releases)
- [Repl.it](#5-replit)
- [Replit](#5-replit)
- [Deployment](#deployment)
---
@@ -43,13 +43,13 @@ Feel free to [open an issue](https://github.com/spikecodes/libreddit/issues/new)
|-|-|-|
| [libredd.it](https://libredd.it) (official) | 🇺🇸 US | |
| [libreddit.spike.codes](https://libreddit.spike.codes) (official) | 🇺🇸 US | |
| [libreddit.dothq.co](https://libreddit.dothq.co) | 🇺🇸 US | |
| [libreddit.dothq.co](https://libreddit.dothq.co) | 🇺🇸 US | |
| [libreddit.kavin.rocks](https://libreddit.kavin.rocks) | 🇮🇳 IN | ✅ |
| [libreddit.himiko.cloud](https://libreddit.himiko.cloud) | 🇧🇬 BG | |
| [libreddit.himiko.cloud](https://libreddit.himiko.cloud) | 🇫🇮 FI | |
| [libreddit.bcow.xyz](https://libreddit.bcow.xyz) | 🇺🇸 US | |
| [spjmllawtheisznfs7uryhxumin26ssv2draj7oope3ok3wuhy43eoyd.onion](http://spjmllawtheisznfs7uryhxumin26ssv2draj7oope3ok3wuhy43eoyd.onion) | 🇮🇳 IN | |
| [fwhhsbrbltmrct5hshrnqlqygqvcgmnek3cnka55zj4y7nuus5muwyyd.onion](http://fwhhsbrbltmrct5hshrnqlqygqvcgmnek3cnka55zj4y7nuus5muwyyd.onion) | 🇩🇪 DE | |
| [libreddit.himiko7xl2skojc6odi7hykl626gt4qki3vxdbv33u2u3af76d6k32ad.onion](http://libreddit.himiko7xl2skojc6odi7hykl626gt4qki3vxdbv33u2u3af76d6k32ad.onion) | 🇧🇬 BG | |
| [libreddit.himiko7xl2skojc6odi7hykl626gt4qki3vxdbv33u2u3af76d6k32ad.onion](http://libreddit.himiko7xl2skojc6odi7hykl626gt4qki3vxdbv33u2u3af76d6k32ad.onion) | 🇫🇮 FI | |
| [dflv6yjt7il3n3tggf4qhcmkzbti2ppytqx3o7pjrzwgntutpewscyid.onion](http://dflv6yjt7il3n3tggf4qhcmkzbti2ppytqx3o7pjrzwgntutpewscyid.onion/) | 🇺🇸 US | |
A checkmark in the "Cloudflare" category here refers to the use of the reverse proxy, [Cloudflare](https://cloudflare.com). The checkmark will not be listed for a site which uses Cloudflare DNS but rather the proxying service which grants Cloudflare the ability to monitor traffic to the website.
@@ -137,9 +137,9 @@ For transparency, I hope to describe all the ways Libreddit handles user privacy
**DNS:** Both official domains (`libredd.it` and `libreddit.spike.codes`) use Cloudflare as the DNS resolver. Though, the sites are not proxied through Cloudflare meaning Cloudflare doesn't have access to user traffic.
**Cookies:** Libreddit uses optional cookies to store any configured settings in [the settings menu](https://libredd.it/settings). This is not a cross-site cookie and the cookie holds no personal data, only a value of the possible layout.
**Cookies:** Libreddit uses optional cookies to store any configured settings in [the settings menu](https://libreddit.spike.codes/settings). This is not a cross-site cookie and the cookie holds no personal data, only a value of the possible layout.
**Hosting:** The official instances are hosted on [Repl.it](https://repl.it/) which monitors usage to prevent abuse. I can understand if this invalidates certain users' threat models and therefore, selfhosting and browsing through Tor are welcomed.
**Hosting:** The official instances are hosted on [Replit](https://replit.com/) which monitors usage to prevent abuse. I can understand if this invalidates certain users' threat models and therefore, selfhosting and browsing through Tor are welcomed.
---
@@ -177,15 +177,15 @@ yay -S libreddit-git
If you're on Linux and none of these methods work for you, you can grab a Linux binary from [the newest release](https://github.com/spikecodes/libreddit/releases/latest).
## 5) Repl.it
## 5) Replit
**Note:** Repl.it is a free option but they are *not* private and will monitor server usage to prevent abuse. If you need a free and easy setup, this method may work best for you.
**Note:** Replit is a free option but they are *not* private and will monitor server usage to prevent abuse. If you need a free and easy setup, this method may work best for you.
1. Create a Repl.it account (see note above)
2. Visit [the official Repl](https://repl.it/@spikethecoder/libreddit) and fork it
1. Create a Replit account (see note above)
2. Visit [the official Repl](https://replit.com/@spikethecoder/libreddit) and fork it
3. Hit the run button to download the latest Libreddit version and start it
In the web preview (defaults to top right), you should see your instance hosted where you can assign a [custom domain](https://docs.repl.it/repls/web-hosting#custom-domains).
In the web preview (defaults to top right), you should see your instance hosted where you can assign a [custom domain](https://docs.replit.com/repls/web-hosting#custom-domains).
---
@@ -197,6 +197,14 @@ Once installed, deploy Libreddit to `0.0.0.0:8080` by running:
libreddit
```
## Proxying using NGINX
**NOTE** If you're [proxying Libreddit through a NGINX Reverse Proxy](https://github.com/spikecodes/libreddit/issues/122#issuecomment-782226853), add
```nginx
proxy_http_version 1.1;
```
to your NGINX configuration file above your `proxy_pass` line.
## Building
```

src/main.rs

@@ -1,3 +1,14 @@
// Global specifiers
#![forbid(unsafe_code)]
#![warn(clippy::pedantic, clippy::all)]
#![allow(
clippy::needless_pass_by_value,
clippy::match_wildcard_for_single_variants,
clippy::cast_possible_truncation,
clippy::similar_names,
clippy::cast_possible_wrap
)]
// Reference local files
mod post;
mod proxy;
@@ -48,10 +59,10 @@ impl<State: Clone + Send + Sync + 'static> Middleware<State> for NormalizePath {
if path.ends_with('/') {
Ok(next.run(request).await)
} else {
let normalized = if query != "" {
format!("{}/?{}", path.replace("//", "/"), query)
} else {
let normalized = if query.is_empty() {
format!("{}/", path.replace("//", "/"))
} else {
format!("{}/?{}", path.replace("//", "/"), query)
};
Ok(redirect(normalized))
}
@@ -208,7 +219,7 @@ async fn main() -> tide::Result<()> {
app.at("/settings/restore/").get(settings::restore);
// Subreddit services
app.at("/r/:sub/").get(subreddit::page);
app.at("/r/:sub/").get(subreddit::community);
app.at("/r/:sub/subscribe/").post(subreddit::subscriptions);
app.at("/r/:sub/unsubscribe/").post(subreddit::subscriptions);
@@ -224,10 +235,13 @@ async fn main() -> tide::Result<()> {
app.at("/r/:sub/w/").get(subreddit::wiki);
app.at("/r/:sub/w/:page/").get(subreddit::wiki);
app.at("/r/:sub/:sort/").get(subreddit::page);
app.at("/r/:sub/:sort/").get(subreddit::community);
// Comments handler
app.at("/comments/:id/").get(post::item);
// Front page
app.at("/").get(subreddit::page);
app.at("/").get(subreddit::community);
// View Reddit wiki
app.at("/w/").get(subreddit::wiki);
@@ -244,7 +258,7 @@ async fn main() -> tide::Result<()> {
app.at("/:id/").get(|req: Request<()>| async {
match req.param("id") {
// Sort front page
Ok("best") | Ok("hot") | Ok("new") | Ok("top") | Ok("rising") | Ok("controversial") => subreddit::page(req).await,
Ok("best") | Ok("hot") | Ok("new") | Ok("top") | Ok("rising") | Ok("controversial") => subreddit::community(req).await,
// Short link for post
Ok(id) if id.len() > 4 && id.len() < 7 => post::item(req).await,
// Error message for unknown pages
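The `NormalizePath` change above is behavior-preserving: the branches are reordered so the empty-query case reads first. A minimal sketch of the same rewrite rule as a standalone function (a hypothetical form for illustration, outside the tide middleware):

```rust
// Hypothetical standalone form of the rule in NormalizePath: collapse
// doubled slashes and guarantee a trailing slash, keeping any query
// string intact.
fn normalize(path: &str, query: &str) -> String {
    let base = path.replace("//", "/");
    if query.is_empty() {
        format!("{}/", base)
    } else {
        format!("{}/?{}", base, query)
    }
}

// normalize("/r//rust", "sort=top") == "/r/rust/?sort=top"
// normalize("/settings", "")        == "/settings/"
```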

src/post.rs

@@ -1,5 +1,8 @@
// CRATES
use crate::utils::*;
use crate::esc;
use crate::utils::{
cookie, error, format_num, format_url, param, request, rewrite_urls, template, time, val, Author, Comment, Flags, Flair, FlairPart, Media, Post, Preferences,
};
use tide::Request;
use async_recursion::async_recursion;
@@ -79,7 +82,7 @@ async fn parse_post(json: &serde_json::Value) -> Post {
// Build a post using data parsed from Reddit post API
Post {
id: val(post, "id"),
title: val(post, "title"),
title: esc!(post, "title"),
community: val(post, "subreddit"),
body: rewrite_urls(&val(post, "selftext_html")).replace("\\", ""),
author: Author {
@@ -90,7 +93,7 @@ async fn parse_post(json: &serde_json::Value) -> Post {
post["data"]["author_flair_richtext"].as_array(),
post["data"]["author_flair_text"].as_str(),
),
text: val(post, "link_flair_text"),
text: esc!(post, "link_flair_text"),
background_color: val(post, "author_flair_background_color"),
foreground_color: val(post, "author_flair_text_color"),
},
@@ -113,7 +116,7 @@ async fn parse_post(json: &serde_json::Value) -> Post {
post["data"]["link_flair_richtext"].as_array(),
post["data"]["link_flair_text"].as_str(),
),
text: val(post, "link_flair_text"),
text: esc!(post, "link_flair_text"),
background_color: val(post, "link_flair_background_color"),
foreground_color: if val(post, "link_flair_text_color") == "dark" {
"black".to_string()
@@ -189,14 +192,14 @@ async fn parse_comments(json: &serde_json::Value, post_link: &str, post_author:
data["author_flair_richtext"].as_array(),
data["author_flair_text"].as_str(),
),
text: val(&comment, "link_flair_text"),
text: esc!(&comment, "link_flair_text"),
background_color: val(&comment, "author_flair_background_color"),
foreground_color: val(&comment, "author_flair_text_color"),
},
distinguished: val(&comment, "distinguished"),
},
score: if data["score_hidden"].as_bool().unwrap_or_default() {
"".to_string()
"\u{2022}".to_string()
} else {
format_num(score)
},

src/proxy.rs

@@ -1,6 +1,8 @@
use surf::Body;
use tide::{Request, Response};
use async_std::{io, net::TcpStream, prelude::*};
use async_tls::TlsConnector;
use tide::{http::url::Url, Request, Response};
/// Handle tide routes to proxy by parsing `params` from `req`uest.
pub async fn handler(req: Request<()>, format: &str, params: Vec<&str>) -> tide::Result {
let mut url = format.to_string();
@@ -12,21 +14,75 @@ pub async fn handler(req: Request<()>, format: &str, params: Vec<&str>) -> tide:
request(url).await
}
/// Sends a request to a Reddit media domain and proxy the response.
///
/// Relays the `Content-Length` and `Content-Type` header.
async fn request(url: String) -> tide::Result {
match surf::get(url).await {
Ok(res) => {
let content_length = res.header("Content-Length").map(|v| v.to_string()).unwrap_or_default();
let content_type = res.content_type().map(|m| m.to_string()).unwrap_or_default();
// Parse url into parts
let parts = Url::parse(&url)?;
let host = parts.host().map(|host| host.to_string()).unwrap_or_default();
let domain = parts.domain().unwrap_or_default();
let path = format!("{}?{}", parts.path(), parts.query().unwrap_or_default());
// Build reddit-compliant user agent for Libreddit
let user_agent = format!("web:libreddit:{}", env!("CARGO_PKG_VERSION"));
// Construct a request body
let req = format!(
"GET {} HTTP/1.1\r\nHost: {}\r\nAccept: */*\r\nConnection: close\r\nUser-Agent: {}\r\n\r\n",
path, host, user_agent
);
// Initialize TLS connector for requests
let connector = TlsConnector::default();
// Open a TCP connection
let tcp_stream = TcpStream::connect(format!("{}:443", domain)).await?;
// Use the connector to start the handshake process
let mut tls_stream = connector.connect(domain, tcp_stream).await?;
// Write the aforementioned HTTP request to the stream
tls_stream.write_all(req.as_bytes()).await?;
// And read the response
let mut writer = Vec::new();
io::copy(&mut tls_stream, &mut writer).await?;
// Find the delimiter which separates the body and headers
match (0..writer.len()).find(|i| writer[i.to_owned()] == 10_u8 && writer[i - 2] == 10_u8) {
Some(delim) => {
// Split the response into the body and headers
let split = writer.split_at(delim);
let headers_str = String::from_utf8_lossy(split.0);
let headers = headers_str.split("\r\n").collect::<Vec<&str>>();
let body = split.1[1..split.1.len()].to_vec();
// Parse the status code from the first header line
let status: u16 = headers[0].split(' ').collect::<Vec<&str>>()[1].parse().unwrap_or_default();
// Define a closure for easier header fetching
let header = |name: &str| {
headers
.iter()
.find(|x| x.starts_with(name))
.map(|f| f.split(": ").collect::<Vec<&str>>()[1])
.unwrap_or_default()
};
// Parse Content-Length and Content-Type from headers
let content_length = header("Content-Length");
let content_type = header("Content-Type");
// Build response
Ok(
Response::builder(res.status())
.body(Body::from_reader(res, None))
Response::builder(status)
.body(tide::http::Body::from_bytes(body))
.header("Cache-Control", "public, max-age=1209600, s-maxage=86400")
.header("Content-Length", content_length)
.header("Content-Type", content_type)
.build(),
)
}
Err(e) => Ok(Response::builder(503).body(e.to_string()).build()),
None => Ok(Response::builder(503).body("Couldn't parse media".to_string()).build()),
}
}
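The rewritten `request` above locates the end of the header block by scanning for consecutive line-feed bytes. A sketch of the same header/body split using a four-byte window over `\r\n\r\n` (an alternative formulation for illustration, not the committed code):

```rust
// Sketch: split a raw HTTP/1.1 response at the first "\r\n\r\n",
// the same boundary the byte-scan above searches for, without the
// `i - 2` index arithmetic.
fn split_response(raw: &[u8]) -> Option<(&[u8], &[u8])> {
    raw.windows(4)
        .position(|w| w == b"\r\n\r\n")
        .map(|i| (&raw[..i], &raw[i + 4..]))
}
```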

src/search.rs

@@ -84,7 +84,7 @@ async fn search_subreddits(q: &str) -> Vec<Subreddit> {
name: val(subreddit, "display_name_prefixed"),
url: val(subreddit, "url"),
description: val(subreddit, "public_description"),
subscribers: subreddit["data"]["subscribers"].as_u64().unwrap_or_default() as i64,
subscribers: subreddit["data"]["subscribers"].as_f64().unwrap_or_default() as i64,
})
.collect::<Vec<Subreddit>>(),
_ => Vec::new(),

src/settings.rs

@@ -13,7 +13,7 @@ struct SettingsTemplate {
#[derive(serde::Deserialize, Default)]
#[serde(default)]
pub struct SettingsForm {
pub struct Form {
theme: Option<String>,
front_page: Option<String>,
layout: Option<String>,
@@ -33,7 +33,7 @@ pub async fn get(req: Request<()>) -> tide::Result {
// Set cookies using response "Set-Cookie" header
pub async fn set(mut req: Request<()>) -> tide::Result {
let form: SettingsForm = req.body_form().await.unwrap_or_default();
let form: Form = req.body_form().await.unwrap_or_default();
let mut res = redirect("/settings".to_string());
@@ -58,7 +58,7 @@ pub async fn set(mut req: Request<()>) -> tide::Result {
// Set cookies using response "Set-Cookie" header
pub async fn restore(req: Request<()>) -> tide::Result {
let form: SettingsForm = req.query()?;
let form: Form = req.query()?;
let path = match form.redirect {
Some(value) => format!("/{}/", value),

src/subreddit.rs

@@ -1,5 +1,6 @@
// CRATES
use crate::utils::*;
use crate::esc;
use crate::utils::{cookie, error, format_num, format_url, param, redirect, request, rewrite_urls, template, val, Post, Preferences, Subreddit};
use askama::Template;
use tide::{http::Cookie, Request};
use time::{Duration, OffsetDateTime};
@@ -25,7 +26,7 @@ struct WikiTemplate {
}
// SERVICES
pub async fn page(req: Request<()>) -> tide::Result {
pub async fn community(req: Request<()>) -> tide::Result {
// Build Reddit API path
let subscribed = cookie(&req, "subscriptions");
let front_page = cookie(&req, "front_page");
@@ -108,10 +109,10 @@ pub async fn subscriptions(req: Request<()>) -> tide::Result {
// Redirect back to subreddit
// check for redirect parameter if unsubscribing from outside sidebar
let redirect_path = param(&format!("/?{}", query), "redirect");
let path = if !redirect_path.is_empty() {
format!("/{}/", redirect_path)
} else {
let path = if redirect_path.is_empty() {
format!("/r/{}", sub)
} else {
format!("/{}/", redirect_path)
};
let mut res = redirect(path);
@@ -139,9 +140,9 @@ pub async fn wiki(req: Request<()>) -> tide::Result {
let path: String = format!("/r/{}/wiki/{}.json?raw_json=1", sub, page);
match request(path).await {
Ok(res) => template(WikiTemplate {
Ok(response) => template(WikiTemplate {
sub,
wiki: rewrite_urls(res["data"]["content_html"].as_str().unwrap_or_default()),
wiki: rewrite_urls(response["data"]["content_html"].as_str().unwrap_or_default()),
page,
prefs: Preferences::new(req),
}),
@@ -167,9 +168,9 @@ async fn subreddit(sub: &str) -> Result<Subreddit, String> {
let icon = if community_icon.is_empty() { val(&res, "icon_img") } else { community_icon.to_string() };
let sub = Subreddit {
name: val(&res, "display_name"),
title: val(&res, "title"),
description: val(&res, "public_description"),
name: esc!(&res, "display_name"),
title: esc!(&res, "title"),
description: esc!(&res, "public_description"),
info: rewrite_urls(&val(&res, "description_html").replace("\\", "")),
icon: format_url(&icon),
members: format_num(members),

src/user.rs

@@ -1,5 +1,6 @@
// CRATES
use crate::utils::*;
use crate::esc;
use crate::utils::{error, format_url, param, request, template, Post, Preferences, User};
use askama::Template;
use tide::Request;
use time::OffsetDateTime;
@@ -57,17 +58,17 @@ async fn user(name: &str) -> Result<User, String> {
// Grab creation date as unix timestamp
let created: i64 = res["data"]["created"].as_f64().unwrap_or(0.0).round() as i64;
// nested_val function used to parse JSON from Reddit APIs
// Closure used to parse JSON from Reddit APIs
let about = |item| res["data"]["subreddit"][item].as_str().unwrap_or_default().to_string();
// Parse the JSON output into a User struct
Ok(User {
name: name.to_string(),
title: about("title"),
title: esc!(about("title")),
icon: format_url(&about("icon_img")),
karma: res["data"]["total_karma"].as_i64().unwrap_or(0),
created: OffsetDateTime::from_unix_timestamp(created).format("%b %d '%y"),
banner: about("banner_img"),
banner: esc!(about("banner_img")),
description: about("public_description"),
})
}

src/utils.rs

@@ -1,7 +1,11 @@
//
// CRATES
//
use crate::esc;
use askama::Template;
use async_recursion::async_recursion;
use async_std::{io, net::TcpStream, prelude::*};
use async_tls::TlsConnector;
use cached::proc_macro::cached;
use regex::Regex;
use serde_json::{from_str, Error, Value};
@@ -123,7 +127,7 @@ impl Media {
let url = if post_type == "self" || post_type == "link" {
url_val.as_str().unwrap_or_default().to_string()
} else {
format_url(url_val.as_str().unwrap_or_default()).to_string()
format_url(url_val.as_str().unwrap_or_default())
};
(
@@ -224,14 +228,14 @@
let (rel_time, created) = time(data["created_utc"].as_f64().unwrap_or_default());
let score = data["score"].as_i64().unwrap_or_default();
let ratio: f64 = data["upvote_ratio"].as_f64().unwrap_or(1.0) * 100.0;
let title = val(post, "title");
let title = esc!(post, "title");
// Determine the type of media along with the media URL
let (post_type, media, gallery) = Media::parse(&data).await;
posts.push(Self {
id: val(post, "id"),
title: if title.is_empty() { fallback_title.to_owned() } else { title },
title: esc!(if title.is_empty() { fallback_title.to_owned() } else { title }),
community: val(post, "subreddit"),
body: rewrite_urls(&val(post, "body_html")),
author: Author {
@@ -242,14 +246,14 @@
data["author_flair_richtext"].as_array(),
data["author_flair_text"].as_str(),
),
text: val(post, "link_flair_text"),
text: esc!(post, "link_flair_text"),
background_color: val(post, "author_flair_background_color"),
foreground_color: val(post, "author_flair_text_color"),
},
distinguished: val(post, "distinguished"),
},
score: if data["hide_score"].as_bool().unwrap_or_default() {
"".to_string()
"\u{2022}".to_string()
} else {
format_num(score)
},
@@ -269,7 +273,7 @@
data["link_flair_richtext"].as_array(),
data["link_flair_text"].as_str(),
),
text: val(post, "link_flair_text"),
text: esc!(post, "link_flair_text"),
background_color: val(post, "link_flair_background_color"),
foreground_color: if val(post, "link_flair_text_color") == "dark" {
"black".to_string()
@@ -408,10 +412,10 @@ pub fn format_url(url: &str) -> String {
Ok(parsed) => {
let domain = parsed.domain().unwrap_or_default();
let capture = |regex: &str, format: &str, levels: i16| {
let capture = |regex: &str, format: &str, segments: i16| {
Regex::new(regex)
.map(|re| match re.captures(url) {
Some(caps) => match levels {
Some(caps) => match segments {
1 => [format, &caps[1], "/"].join(""),
2 => [format, &caps[1], "/", &caps[2], "/"].join(""),
_ => String::new(),
@@ -483,6 +487,27 @@ pub fn val(j: &Value, k: &str) -> String {
j["data"][k].as_str().unwrap_or_default().to_string()
}
#[macro_export]
macro_rules! esc {
($f:expr) => {
$f.replace('<', "&lt;").replace('>', "&gt;")
};
($j:expr, $k:expr) => {
$j["data"][$k].as_str().unwrap_or_default().to_string().replace('<', "&lt;").replace('>', "&gt;")
};
}
// Escape < and > to accurately render HTML
// pub fn esc(j: &Value, k: &str) -> String {
// val(j,k)
// // .replace('&', "&amp;")
// .replace('<', "&lt;")
// .replace('>', "&gt;")
// // .replace('"', "&quot;")
// // .replace('\'', "&#x27;")
// // .replace('/', "&#x2f;")
// }
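A hypothetical usage of the new `esc!` macro (assuming it is imported from the crate root via `#[macro_export]`), showing what the one-argument form produces:

```rust
// Hypothetical example: angle brackets in user-controlled text are
// escaped before the string reaches an askama template.
fn main() {
    let title = String::from("<script>alert(1)</script> post");
    assert_eq!(esc!(title), "&lt;script&gt;alert(1)&lt;/script&gt; post");
}
```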
//
// NETWORKING
//
@@ -510,54 +535,90 @@ pub async fn error(req: Request<()>, msg: String) -> tide::Result {
Ok(Response::builder(404).content_type("text/html").body(body).build())
}
#[async_recursion]
async fn connect(path: String) -> io::Result<String> {
// Build reddit-compliant user agent for Libreddit
let user_agent = format!("web:libreddit:{}", env!("CARGO_PKG_VERSION"));
// Construct an HTTP request body
let req = format!(
"GET {} HTTP/1.1\r\nHost: www.reddit.com\r\nAccept: */*\r\nConnection: close\r\nUser-Agent: {}\r\n\r\n",
path, user_agent
);
// Open a TCP connection
let tcp_stream = TcpStream::connect("www.reddit.com:443").await?;
// Initialize TLS connector for requests
let connector = TlsConnector::default();
// Use the connector to start the handshake process
let mut tls_stream = connector.connect("www.reddit.com", tcp_stream).await?;
// Write the crafted HTTP request to the stream
tls_stream.write_all(req.as_bytes()).await?;
// And read the response
let mut writer = Vec::new();
io::copy(&mut tls_stream, &mut writer).await?;
let response = String::from_utf8_lossy(&writer).to_string();
let split = response.split("\r\n\r\n").collect::<Vec<&str>>();
let headers = split[0].split("\r\n").collect::<Vec<&str>>();
let status: i16 = headers[0].split(' ').collect::<Vec<&str>>()[1].parse().unwrap_or(200);
let body = split[1].to_string();
if (300..400).contains(&status) {
let location = headers
.iter()
.find(|header| header.starts_with("location:"))
.map(|f| f.to_owned())
.unwrap_or_default()
.split(": ")
.collect::<Vec<&str>>()[1];
connect(location.replace("https://www.reddit.com", "")).await
} else {
Ok(body)
}
}
// Make a request to a Reddit API and parse the JSON response
#[cached(size = 100, time = 30, result = true)]
pub async fn request(path: String) -> Result<Value, String> {
let url = format!("https://www.reddit.com{}", path);
// Build reddit-compliant user agent for Libreddit
let user_agent = format!("web:libreddit:{}", env!("CARGO_PKG_VERSION"));
// Send request using surf
let req = surf::get(&url).header("User-Agent", user_agent.as_str());
let client = surf::client().with(surf::middleware::Redirect::new(5));
let res = client.send(req).await;
let err = |msg: &str, e: String| -> Result<Value, String> {
eprintln!("{} - {}: {}", url, msg, e);
Err(msg.to_string())
};
match res {
Ok(mut response) => match response.take_body().into_string().await {
// If response is success
Ok(body) => {
// Parse the response from Reddit as JSON
let parsed: Result<Value, Error> = from_str(&body);
match parsed {
Ok(json) => {
// If Reddit returned an error
if json["error"].is_i64() {
Err(
json["reason"]
.as_str()
.unwrap_or_else(|| {
json["message"].as_str().unwrap_or_else(|| {
eprintln!("{} - Error parsing reddit error", url);
"Error parsing reddit error"
})
match connect(path).await {
Ok(body) => {
// Parse the response from Reddit as JSON
let parsed: Result<Value, Error> = from_str(&body);
match parsed {
Ok(json) => {
// If Reddit returned an error
if json["error"].is_i64() {
Err(
json["reason"]
.as_str()
.unwrap_or_else(|| {
json["message"].as_str().unwrap_or_else(|| {
eprintln!("{} - Error parsing reddit error", url);
"Error parsing reddit error"
})
.to_string(),
)
} else {
Ok(json)
}
})
.to_string(),
)
} else {
Ok(json)
}
Err(e) => err("Failed to parse page JSON data", e.to_string()),
}
Err(e) => err("Failed to parse page JSON data", e.to_string()),
}
Err(e) => err("Couldn't parse request body", e.to_string()),
},
}
Err(e) => err("Couldn't send request to Reddit", e.to_string()),
}
}
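The new `connect` helper above follows 3xx redirects by re-reading the raw status line of each response. A small hypothetical helper isolating that status-line parse (for illustration, not part of the commit):

```rust
// Hypothetical helper mirroring how connect() extracts the status
// code from the first line of a raw HTTP response.
fn status_code(status_line: &str) -> u16 {
    status_line
        .split(' ')                   // "HTTP/1.1 301 Moved Permanently"
        .nth(1)                       // -> "301"
        .and_then(|s| s.parse().ok())
        .unwrap_or(200)               // same fallback connect() uses
}

fn main() {
    assert_eq!(status_code("HTTP/1.1 301 Moved Permanently"), 301);
}
```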

static/style.css

@@ -1115,46 +1115,6 @@ td, th {
/* Mobile */
@media screen and (max-width: 480px) {
#version { display: none; }
.post {
grid-template: "post_header post_header post_thumbnail" auto
"post_title post_title post_thumbnail" 1fr
"post_media post_media post_thumbnail" auto
"post_body post_body post_thumbnail" auto
"post_score post_footer post_thumbnail" auto
/ auto 1fr fit-content(min(20%, 152px));
}
.post_score {
margin: 5px 0px 20px 15px;
padding: 0;
}
.compact .post_score { padding: 0; }
.post_score::before { content: "↑" }
.post_header { font-size: 14px; }
.post_footer { margin-left: 15px; }
.replies > .comment {
margin-left: -25px;
padding: 5px 0;
}
.comment_left {
min-width: 45px;
padding: 5px 0px;
}
.comment_author { margin-left: 10px; }
.comment_score { min-width: 35px; }
.comment_data::marker { font-size: 18px; }
.created { width: 100%; }
}
@media screen and (max-width: 800px) {
body { padding-top: 120px }
@@ -1196,3 +1156,60 @@ td, th {
#logo, #links { margin-bottom: 5px; }
#searchbox { width: calc(100vw - 35px); }
}
@media screen and (max-width: 480px) {
body { padding-top: 100px; }
#version { display: none; }
.post {
grid-template: "post_header post_header post_thumbnail" auto
"post_title post_title post_thumbnail" 1fr
"post_media post_media post_thumbnail" auto
"post_body post_body post_thumbnail" auto
"post_score post_footer post_thumbnail" auto
/ auto 1fr fit-content(min(20%, 152px));
}
.post_score {
margin: 5px 0px 20px 15px;
padding: 0;
}
.compact .post_score { padding: 0; }
.post_score::before { content: "↑" }
.post_header { font-size: 14px; }
.post_footer { margin-left: 15px; }
.replies > .comment {
margin-left: -12px;
padding: 5px 0;
}
.comment_left {
min-width: auto;
padding: 5px 0px;
align-items: initial;
margin-top: -5px;
}
.line {
margin-left: 5px;
}
/* .thread { margin-left: -5px; } */
.comment_right { padding: 5px 0 10px 2px; }
.comment_author { margin-left: 5px; }
.comment_data { margin-left: 12px; }
.comment_data::marker { font-size: 22px; }
.created { width: 100%; }
.comment_score {
min-width: 32px;
height: 20px;
font-size: 15px;
padding: 7px 0px;
margin-right: -5px;
}
}

templates/post.html

@@ -13,7 +13,6 @@
<!-- Meta Tags -->
<meta name="author" content="u/{{ post.author.name }}">
<meta name="title" content="{{ post.title }} - r/{{ post.community }}">
<meta name="description" content="View on Libreddit, an alternative private front-end to Reddit.">
<meta property="og:type" content="website">
<meta property="og:url" content="{{ post.permalink }}">
<meta property="og:title" content="{{ post.title }} - r/{{ post.community }}">

templates/user.html

@@ -65,7 +65,7 @@
</div>
<aside>
<div class="panel" id="user">
<img id="user_icon" src="{{ user.icon }}">
<img id="user_icon" src="{{ user.icon }}" alt="User icon">
<p id="user_title">{{ user.title }}</p>
<p id="user_name">u/{{ user.name }}</p>
<div id="user_description">{{ user.description }}</div>

templates/utils.html

@@ -86,9 +86,9 @@
</svg>
</a>
{% else if (prefs.layout.is_empty() || prefs.layout == "card") && post.post_type == "gif" %}
<video class="post_media_video short" src="{{ post.media.url }}" width="{{ post.media.width }}px" height="{{ post.media.height }}px" controls loop autoplay><a href={{ post.media.url }}>Video</a></video>
<video class="post_media_video short" src="{{ post.media.url }}" width="{{ post.media.width }}" height="{{ post.media.height }}" controls loop autoplay><a href={{ post.media.url }}>Video</a></video>
{% else if (prefs.layout.is_empty() || prefs.layout == "card") && post.post_type == "video" %}
<video class="post_media_video short" src="{{ post.media.url }}" width="{{ post.media.width }}px" height="{{ post.media.height }}px" poster="{{ post.media.poster }}" preload="none" controls autoplay><a href={{ post.media.url }}>Video</a></video>
<video class="post_media_video short" src="{{ post.media.url }}" width="{{ post.media.width }}" height="{{ post.media.height }}" poster="{{ post.media.poster }}" preload="none" controls autoplay><a href={{ post.media.url }}>Video</a></video>
{% else if post.post_type != "self" %}
<a class="post_thumbnail {% if post.thumbnail.url.is_empty() %}no_thumbnail{% endif %}" href="{% if post.post_type == "link" %}{{ post.media.url }}{% else %}{{ post.permalink }}{% endif %}">
{% if post.thumbnail.url.is_empty() %}