add: articles
parent 02c8862a89
commit e72e15ec97
BIN assets/img/author_transparent.png (Stored with Git LFS)
@@ -25,6 +25,18 @@
```
pageRef = "categories"
weight = 20

[[main]]
name = "Homelab"
parent = "Categories"
pageRef = "tags/homelab"
weight = 10

[[main]]
name = "Tools"
parent = "Categories"
pageRef = "tags/tools"
weight = 10

[[main]]
identifier = "github"
pre = "github"
```
BIN content/posts/authelia-selfhosted-sso/featured.png (Stored with Git LFS) Normal file
BIN content/posts/authelia-selfhosted-sso/img/image-1.png (Stored with Git LFS) Normal file
BIN content/posts/authelia-selfhosted-sso/img/image-2.png (Stored with Git LFS) Normal file
BIN content/posts/authelia-selfhosted-sso/img/image-3.png (Stored with Git LFS) Normal file
BIN content/posts/authelia-selfhosted-sso/img/image-4.png (Stored with Git LFS) Normal file
BIN content/posts/authelia-selfhosted-sso/img/image-5.png (Stored with Git LFS) Normal file
297 content/posts/authelia-selfhosted-sso/index.md Normal file
@@ -0,0 +1,297 @@

---
title: "Authelia: a selfhosted SSO"
date: 2022-04-10
draft: false
slug: "authelia-selfhosted-sso"
tags: ["tools", "sso"]
---

First of all, what is an SSO?

> Single sign-on (SSO) is an authentication scheme that allows a user to log in with a single ID to any of several related, yet independent, software systems.

An SSO lets you authenticate once and gain access to several applications. But in the context of a homelab, what is the point?

There are several advantages:
- Secure services so they can be exposed to the outside
- Simplify the authentication process across multiple services
- Add two-factor authentication
- Regulate logins (anti-brute-force)

---

Now that we have seen what an SSO is and why it is useful in a homelab, let's start the installation!

In my case I chose [Authelia](https://www.authelia.com), an open-source SSO with interesting features such as two-factor authentication.

First, let's install and configure the Docker container. For that you can follow the tutorial on Authelia's website: [Getting Started](https://www.authelia.com/docs/getting-started.html).

In my case I use Unraid, where a template is directly available; I just have to set the port to use for the web interface.

<img class="thumbnailshadow" src="img/image-1.png"/>

Before launching the container we will have to make several changes to the configuration file.

Before starting you must have the following elements:
- A subdomain for the authentication portal (e.g. auth.yourdomain.com)
- A [Redis](https://hub.docker.com/_/redis) container
- A MySQL database (MariaDB, ...)
- A reverse proxy ([Nginx Proxy Manager](https://nginxproxymanager.com), [Traefik](https://traefik.io), ...)
- [Optional] SMTP credentials for mail notifications
- [Optional] A Duo account for push authentication

Now that everything is ready, we can move on to the configuration of Authelia!

## Configuration
You can start by downloading my configuration template. This will allow you to follow the rest of the guide more easily and save time.

Let's start by generating the JWT secret. For that I use this [site](https://www.javainuse.com/jwtgenerator).

```
jwt_secret: [example_secret]
```
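The JWT secret only needs to be a long random string, so instead of an online generator you can also produce one locally. A minimal sketch with Python's standard library (the helper name is mine, not part of Authelia):

```python
import secrets

def generate_secret(nbytes: int = 32) -> str:
    """Return a random hex string; 32 bytes -> 64 hex characters."""
    return secrets.token_hex(nbytes)

# Paste the output into jwt_secret
print(generate_secret())
```

The same approach works for the `secret` and `encryption_key` values used later in this guide.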

## Session
Several elements must be configured in this part:
- **secret**: generate a password with the site of your choice
- **domain**: your domain (e.g. `yourdomain.com`)
- **redis**: the IP, port and password of your Redis container

```
session:
  name: authelia_session
  secret: [example_secret]
  expiration: 4h
  inactivity: 30m
  remember_me_duration: 12h
  domain: example.org

  redis:
    host: [redis_ip]
    port: [redis_port]
    password: [redis_password]
    database_index: 0
    maximum_active_connections: 8
    minimum_idle_connections: 0
```

## User
This is the part where you add the users who will be able to connect to the SSO. Nothing has to be modified in the main configuration file, but you will have to create a second file, `users_database.yml`, structured as follows:

```
users:
  user1:
    password: [password_hash]
    displayname: User1
    email: User1s@gmail.com
    groups:
      - user
  user2:
    password: [password_hash]
    displayname: User2
    email: User2s@gmail.com
    groups:
      - user
  [...]
```

To create a user's password hash you can use the following command:

```
docker run --rm authelia/authelia:latest authelia hash-password 'yourpassword'
```

If this does not work, you can manually create the hash using this [site](https://argon2.online) and these parameters:

<img class="thumbnailshadow" src="img/image-2.png"/>

## Access control
For the access policy we will keep things simple with a single role, but nothing prevents you from creating several roles with different rights and access.
We are going to create a `user` group that must use two-factor authentication and has access to all subdomains. For this you have to modify the two `example.org` entries in the template file.

```
access_control:
  default_policy: deny
  rules:
    - domain:
        - [example.org]
        - "[*.example.org]"
      subject: "group:user"
      policy: two_factor
```

## Storage
For storage we will use our MySQL database. You have to modify the following values in the template:
- **encryption_key**: generate a password with the site of your choice
- **mysql**: the IP, port, database name, user and password of your MySQL instance

```
storage:
  encryption_key: [example_secret]
  mysql:
    host: [mysql_ip]
    port: [mysql_port]
    database: [mysql_database_name]
    username: [mysql_user]
    password: [mysql_password]
```

## Regulation
For anti-brute-force protection, we will set up a regulation policy: a maximum of 3 attempts within 2 minutes before a 5-minute ban. This regulation is not the ultimate solution; it is always preferable to also set up Fail2ban to complement it.

```
regulation:
  max_retries: 3
  find_time: 2m
  ban_time: 5m
```
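To make the policy concrete, here is a toy simulation (my own sketch, not Authelia's code) of sliding-window regulation: a ban triggers once `max_retries` failures land within `find_time`, and lasts `ban_time`.

```python
from collections import deque

class Regulator:
    """Toy sliding-window login regulation: ban after
    max_retries failures within find_time seconds."""

    def __init__(self, max_retries: int = 3, find_time: int = 120, ban_time: int = 300):
        self.max_retries = max_retries
        self.find_time = find_time
        self.ban_time = ban_time
        self.failures = deque()  # timestamps of recent failures
        self.banned_until = 0.0

    def is_banned(self, now: float) -> bool:
        return now < self.banned_until

    def record_failure(self, now: float) -> None:
        self.failures.append(now)
        # Drop failures that have slid out of the window
        while self.failures and now - self.failures[0] > self.find_time:
            self.failures.popleft()
        if len(self.failures) >= self.max_retries:
            self.banned_until = now + self.ban_time

r = Regulator()
for t in (0, 30, 60):          # three failures within 2 minutes
    r.record_failure(t)
print(r.is_banned(61))          # → True (banned for 5 minutes)
print(r.is_banned(61 + 300))    # → False (ban expired)
```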

## Notification
To set up email notifications, fill in the following part of the template. This part is not mandatory, but it is used to reset your password in case you forget it...

```
smtp:
  username: [example@example.org]
  password: [smtp_password]
  host: [smtp_host]
  port: [smtp_port]
  sender: [example@example.org]
  identifier: localhost
  subject: "[Authelia] {title}"
  startup_check_address: test@authelia.com
  disable_require_tls: true
  disable_html_emails: false
```

## Two-Factor Authentication
For two-factor authentication, several solutions are offered to you. The two most interesting ones, in my opinion, are the following:

### TOTP
This is the best-known method: you just use an application like Google Authenticator to get a code that changes every 30 seconds. The configuration is very simple: just customize the `issuer` field with the name you want. This name will be displayed in your authenticator application.

```
totp:
  issuer: [example.example.org]
  period: 30
  skew: 1
```
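Under the hood, TOTP (RFC 6238) is just an HMAC over a time-step counter; the `period` field above is the 30-second step. A minimal stdlib sketch, for illustration only (not Authelia's implementation):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    counter = timestamp // period
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59s
print(totp(b"12345678901234567890", 59))  # → 287082
```

Your authenticator app and Authelia both run this computation from the shared secret, which is why the codes match without any network exchange.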

### Duo Push Notification
This is the most practical method: when you connect to your SSO, you receive a notification that lets you validate your access. The configuration requires 3 elements that we will create directly on the [Duo](https://duo.com) website. In the Applications tab, click on Protect an Application and look for Partner Auth API. You can now retrieve the following elements and enter them in the Authelia configuration:
- **Integration key**
- **Secret key**
- **API hostname**

```
duo_api:
  hostname: [duo_hostname]
  integration_key: [duo_integration_key]
  secret_key: [duo_secret_key]
```

---

You can now start the Authelia container. If all goes well, there should be no errors; if there are, correct the problems until there are none. To troubleshoot, you can use the Authelia [documentation](https://www.authelia.com/docs) as well as their [GitHub](https://github.com/authelia/authelia).

## Nginx Proxy Manager
First we have to set up our Authelia subdomain so that it is accessible from the outside and protected with certificates. To do this, create an entry in your reverse proxy and add your subdomain, see:

{{< article link="/posts/how-to-host-multiple-services-on-one-public-ip/" >}}

Now that this is done, we can add our SSO to the different subdomains we want to protect. To do this you just need to change the `[authelia_IP]` in the following code:

```
location /authelia {
    internal;
    set $upstream_authelia http://[authelia_IP]:9091/api/verify;
    proxy_pass_request_body off;
    proxy_pass $upstream_authelia;
    proxy_set_header Content-Length "";

    # Timeout if the real server is dead
    proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
    client_body_buffer_size 128k;
    proxy_set_header Host $host;
    proxy_set_header X-Original-URL $scheme://$http_host$request_uri;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Host $http_host;
    proxy_set_header X-Forwarded-Uri $request_uri;
    proxy_set_header X-Forwarded-Ssl on;
    proxy_redirect http:// $scheme://;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_cache_bypass $cookie_session;
    proxy_no_cache $cookie_session;
    proxy_buffers 4 32k;

    send_timeout 5m;
    proxy_read_timeout 240;
    proxy_send_timeout 240;
    proxy_connect_timeout 240;
}

location / {
    set $upstream_app $forward_scheme://$server:$port;
    proxy_pass $upstream_app;

    auth_request /authelia;
    auth_request_set $target_url https://$http_host$request_uri;
    auth_request_set $user $upstream_http_remote_user;
    auth_request_set $email $upstream_http_remote_email;
    auth_request_set $groups $upstream_http_remote_groups;
    proxy_set_header Remote-User $user;
    proxy_set_header Remote-Email $email;
    proxy_set_header Remote-Groups $groups;

    error_page 401 =302 https://[auth_domain]/?rd=$target_url;

    client_body_buffer_size 128k;

    proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;

    send_timeout 5m;
    proxy_read_timeout 360;
    proxy_send_timeout 360;
    proxy_connect_timeout 360;

    proxy_set_header Host $host;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $http_connection;
    proxy_set_header Accept-Encoding gzip;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Host $http_host;
    proxy_set_header X-Forwarded-Uri $request_uri;
    proxy_set_header X-Forwarded-Ssl on;
    proxy_redirect http:// $scheme://;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_cache_bypass $cookie_session;
    proxy_no_cache $cookie_session;
    proxy_buffers 64 256k;

    set_real_ip_from 172.18.0.0/16;
    set_real_ip_from 172.19.0.0/16;
    real_ip_header CF-Connecting-IP;
    real_ip_recursive on;
}
```

Then you can add it in the `Advanced` tab of the desired subdomain:

<img class="thumbnailshadow" src="img/image-3.png"/>

After saving, go to the address of your subdomain to verify that it works. You should arrive on the following page:

<img class="thumbnailshadow" src="img/image-4.png"/>

You can connect with one of the credentials you created in the `users_database.yml` file. Once connected, you should be redirected to the application hosted on the subdomain!

To configure/modify the two-factor authentication method, go to the subdomain you configured for Authelia (e.g. `auth.yourdomain.com`), then select `Methods`:

<img class="thumbnailshadow" src="img/image-5.png"/>
BIN content/posts/how-to-host-multiple-services-on-one-public-ip/featured.png (Stored with Git LFS) Normal file
BIN content/posts/how-to-host-multiple-services-on-one-public-ip/img/image-1.png (Stored with Git LFS) Normal file
BIN content/posts/how-to-host-multiple-services-on-one-public-ip/img/image-2.png (Stored with Git LFS) Normal file
BIN content/posts/how-to-host-multiple-services-on-one-public-ip/img/image-3.png (Stored with Git LFS) Normal file
BIN content/posts/how-to-host-multiple-services-on-one-public-ip/img/image-4.png (Stored with Git LFS) Normal file
BIN content/posts/how-to-host-multiple-services-on-one-public-ip/img/image-5.png (Stored with Git LFS) Normal file
BIN content/posts/how-to-host-multiple-services-on-one-public-ip/img/image-6.png (Stored with Git LFS) Normal file
BIN content/posts/how-to-host-multiple-services-on-one-public-ip/img/image-7.png (Stored with Git LFS) Normal file
BIN content/posts/how-to-host-multiple-services-on-one-public-ip/img/image-8.png (Stored with Git LFS) Normal file
@@ -0,0 +1,165 @@

---
title: "How to host multiple services on one public IP?"
date: 2022-02-28
draft: false
slug: "how-to-host-multiple-services-on-one-public-ip"
tags: ["homelab", "reverse proxy"]
---

You have services that you want to make accessible from outside? A Plex server, a password manager, a website, ... You use a non-professional connection with a dynamic external IP? Then you must have asked yourself how to host more than one service with only one external IP address!

We will divide the problem into several parts. First, we will see how to link a dynamic external IP to a domain name. Then we will see how to create and redirect subdomains. Finally, we will see how to link subdomains to the services you host.

---

## Dynamic IP
As described in the intro, one of the big problems of self-hosting on a non-professional connection is that the external IP of the router is generally not fixed: at each restart, the IP may change. This is a real problem, because to link a domain to our site, for example, the DNS must know the server IP, and more precisely the external IP of our network. One solution would be to manually update the record at each change, but this is clearly not practical and makes our service infrastructure very unstable. Fortunately, there is an automatic solution to this problem: DDNS.

> Dynamic DNS (DDNS) is a method of automatically updating a name server in the Domain Name System (DNS), often in real time, with the active DDNS configuration of its configured hostnames, addresses or other information.
> — <cite>[Wikipedia](https://en.wikipedia.org/wiki/Dynamic_DNS)</cite>

<img class="thumbnailshadow" src="img/image-1.png"/>

The operation of DDNS is separated into 3 parts:

1. A device in your network sends its IP and a token ID to the DDNS server

2. The DDNS server receives the packet with the token and the IP, and links it to your DuckDNS subdomain

3. You make a request to your DuckDNS subdomain and are redirected to your IP

When the IP changes, the DDNS server just has to update the record with the new IP. The IP address is usually updated every 10 minutes.

{{< alert >}}
If you have a fixed IP with your internet provider, you do not need to do the following steps.
{{< /alert >}}

In my case I decided to use [DuckDNS](https://www.duckdns.org), a free and easily configurable service. First you will have to create an account with the service of your choice. Then you have to get your token: your unique identifier that allows DuckDNS to identify you.

<img class="thumbnailshadow" src="img/image-2.png"/>

You now have to create a subdomain under the duckdns.org domain. To do this, simply fill in the "sub domain" field and click on "add domain".

<img class="thumbnailshadow" src="img/image-3.png"/>

Then go to your Docker manager to install the [linuxserver/duckdns](https://hub.docker.com/r/linuxserver/duckdns) container. The Docker Compose file is quite simple; you just have to set the two following elements:

```
SUBDOMAINS=xxxxx.duckdns.org
TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
```
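For context, a minimal Compose sketch around those two variables might look like the following (values are placeholders and the timezone is an assumption; check the linuxserver/duckdns page for the current recommended file):

```
services:
  duckdns:
    image: lscr.io/linuxserver/duckdns
    environment:
      - TZ=Europe/Paris   # assumption: set your own timezone
      - SUBDOMAINS=xxxxx.duckdns.org
      - TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    restart: unless-stopped
```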

You can then launch the container. If everything is well configured, you can return to DuckDNS and verify that it has received your IP:

<img class="thumbnailshadow" src="img/image-4.png"/>

## Sub-domain creation
Now that we have a domain at DuckDNS, we have to link our personal domain/subdomains to the DuckDNS subdomain.

Go to your domain name manager; in my case it's OVH. If that's not your case, no problem: the process is the same.

To do the redirection you have to create DNS entries of type CNAME.

> A Canonical Name record (abbreviated as CNAME record) is a type of resource record in the Domain Name System (DNS) that maps one domain name (an alias) to another (the canonical name).
> — <cite>[Wikipedia](https://en.wikipedia.org/wiki/CNAME_record)</cite>

To create a CNAME entry, all you need is a subdomain and a target:

<img class="thumbnailshadow" src="img/image-5.png"/>

In this example I create a subdomain "www.d3vyce.fr" which redirects to the DuckDNS domain "xxxx.duckdns.org". Once the information has propagated to the different DNS servers, I should be able to reach the IP of my router via this subdomain.

On the same principle we can create other subdomains:

```
www    IN CNAME xxxx.duckdns.org.
plex   IN CNAME xxxx.duckdns.org.
radarr IN CNAME xxxx.duckdns.org.
```
{{< alert >}}
If you have a fixed IP, you should create a type A entry rather than a CNAME entry, for example:

```
www IN A xxx.xxx.xxx.xxx
```
{{< /alert >}}
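The resolution chain (your subdomain → CNAME → DDNS name → A record → IP) can be sketched as a toy lookup. This is illustrative only: real resolvers also handle TTLs, caching and delegation, and the names and IP below are made up:

```python
# Toy zone data: name -> (record type, value)
records = {
    "www.d3vyce.fr": ("CNAME", "xxxx.duckdns.org"),
    "plex.d3vyce.fr": ("CNAME", "xxxx.duckdns.org"),
    "xxxx.duckdns.org": ("A", "203.0.113.7"),  # address kept fresh by the DDNS client
}

def resolve(name: str, max_hops: int = 10) -> str:
    """Follow CNAME records until an A record is reached."""
    for _ in range(max_hops):
        rtype, value = records[name]
        if rtype == "A":
            return value
        name = value  # CNAME: restart the lookup at the canonical name
    raise RuntimeError("CNAME chain too long")

print(resolve("www.d3vyce.fr"))  # → 203.0.113.7
```

This is also why all subdomains can point at the same DuckDNS name: only one record (the A record) ever needs updating when your IP changes.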

## Reverse Proxy
We now have subdomains linked to a DDNS which itself keeps our IP up to date. But how do we link a subdomain to an application? For example:

- Plex -> plex.d3vyce.fr
- Ghost -> www.d3vyce.fr
- ...

To do this we will set up a reverse proxy.

> A reverse proxy server is a type of proxy server that typically sits behind the firewall in a private network and directs client requests to the appropriate backend server.
> — <cite>[Nginx.com](https://www.nginx.com/resources/glossary/reverse-proxy-server)</cite>

<img class="thumbnailshadow" src="img/image-6.png"/>

Globally, the reverse proxy inspects the requested domain to determine which local IP/port to redirect the request to.
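That dispatch step is essentially a lookup on the request's Host header. A toy sketch (service names, IPs and ports below are hypothetical, and this is not Nginx Proxy Manager code):

```python
# Toy routing table: requested host -> (local IP, port) of the backend
routes = {
    "plex.d3vyce.fr": ("192.168.1.10", 32400),
    "www.d3vyce.fr": ("192.168.1.11", 2368),
}

def route(host: str) -> tuple:
    """Pick the backend for a request based on its Host header."""
    try:
        return routes[host]
    except KeyError:
        # A real proxy would return 404 or a default site here
        raise LookupError(f"no proxy host configured for {host}")

print(route("plex.d3vyce.fr"))  # → ('192.168.1.10', 32400)
```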

The reverse proxy I chose is [Nginx Proxy Manager](https://nginxproxymanager.com): a reverse proxy with a web interface and, more importantly, an integrated Let's Encrypt SSL certificate manager! This will allow us to use our services over HTTPS easily, without having to set up self-signed certificates.

First we will configure port forwarding on our router: this will redirect our HTTP/HTTPS ports to our future Nginx Proxy Manager container.

To do this, go to the administration interface of your router; for me it is accessible at http://192.168.1.1. In the network configuration, look for a NAT tab and create two rules as follows:

```
Name    External Port    Internal Port    Device
HTTP    80               1480             [Docker IP]
HTTPS   443              14443            [Docker IP]
```
{{< alert >}}
I chose not to use the same external/internal port; this is not mandatory, but recommended!
{{< /alert >}}

Now let's install our reverse proxy via Docker. The docker-compose is quite simple; the only elements to change are:

```
ports:
  # These ports are in format <host-port>:<container-port>
  - '80:80' # Public HTTP Port
  - '443:443' # Public HTTPS Port
```

The host port must correspond to the internal port you configured in your NAT rules; in my case that gives:

```
ports:
  # These ports are in format <host-port>:<container-port>
  - '1480:80' # Public HTTP Port
  - '14443:443' # Public HTTPS Port
```

You can now start the container. If all goes well, you should be able to reach the web interface at http://[IP docker machine]:81 and log in with the following default credentials:

```
Email: admin@example.com
Password: changeme
```

After creating a user, we can add our first service! To do this, go to the Hosts -> Proxy Hosts tab and click on "Add Proxy Host".

<img class="thumbnailshadow" src="img/image-7.png"/>

This is where we fill in our subdomain, the local IP of the service and its port. In the example above, I configure the subdomain "test.d3vyce.fr" with the local web server at 192.168.1.10:80.

The options "Cache Assets" and "Block Common Exploits" are not mandatory, but recommended.

{{< alert >}}
The "Websockets Support" option can be the cause of problems for some applications. For Home Assistant, for example, you have to enable it!
{{< /alert >}}

Now let's configure our SSL certificate.

<img class="thumbnailshadow" src="img/image-8.png"/>

Select "Request a new SSL Certificate", then the options "Force SSL", "HTTP/2 Support" and "HSTS Enabled". Fill in your email, accept the terms of service, and save. After a few seconds you should see the status "Online" for your subdomain. If there are no errors, you can now access your service through this subdomain! Using the same principle, you can set up other services.

---

You now have a complete setup for a real self-hosting network. But that's not all: this is a simple setup to which we can add other things. Nginx Proxy Manager already has other features, for example "Access Lists", which provide a simple authentication system. We could also set up an SSO to increase security, a proxy and Fail2ban with Cloudflare, ... (upcoming articles will cover these topics).
BIN content/posts/how-to-index-your-blog-on-google/featured.png (Stored with Git LFS) Normal file
BIN content/posts/how-to-index-your-blog-on-google/img/image-1.png (Stored with Git LFS) Normal file
BIN content/posts/how-to-index-your-blog-on-google/img/image-2.png (Stored with Git LFS) Normal file
BIN content/posts/how-to-index-your-blog-on-google/img/image-3.png (Stored with Git LFS) Normal file
BIN content/posts/how-to-index-your-blog-on-google/img/image-4.png (Stored with Git LFS) Normal file
BIN content/posts/how-to-index-your-blog-on-google/img/image-5.png (Stored with Git LFS) Normal file
BIN content/posts/how-to-index-your-blog-on-google/img/image-6.png (Stored with Git LFS) Normal file
BIN content/posts/how-to-index-your-blog-on-google/img/image.png (Stored with Git LFS) Normal file
76 content/posts/how-to-index-your-blog-on-google/index.md Normal file
@@ -0,0 +1,76 @@

---
title: "How to index your Blog on Google Search"
date: 2022-02-01
draft: false
slug: "how-to-index-your-blog-on-google"
tags: ["indexing"]
---

Today, if you want your blog/portfolio to be visible to others, it is almost mandatory for it to be listed on Google. And for your site to appear in the search engine, Google needs information!

This information is usually collected automatically by a robot that Google has developed. This "Googlebot" wanders from site to site and from page to page to build a "map" of the web, which is what makes Google's search engine so relevant during your searches.

If you want more details on how the Googlebot works, see the documentation [here](https://developers.google.com/search/docs/advanced/crawling/overview-google-crawlers?visit_id=637792655651295943-3178388148&rd=1&ref=blog.d3vyce.fr)!

As I said above, this process is automatic, but we can help it do a better and more accurate scan of our site. This will allow us to be better referenced and potentially visible to more people!

---

First we have to add our domain on the Google Search Console site. To access it you must be logged in with a Google account.

https://search.google.com/u/0/search-console/welcome

You should land on the following page, which gives us the choice between two options.

<img class="thumbnailshadow" src="img/image.png"/>

Domain: this option is used only if you use your domain name exclusively for your website and you don't have any subdomains. In my case I have several subdomains (e.g. status.d3vyce.fr) that I don't want indexed.

URL prefix: this second option allows you to declare a precise URL rather than an entire domain. This is the option I chose for my blog.

You can then enter your domain; in my case it is https://www.d3vyce.fr. If all goes well, the ownership verification of your domain is automatic. If it's not, don't panic: an error message will tell you how to solve the problem. Essentially, Google will provide you with a file to host on your site; this verifies that you control the site and therefore the domain.

<img class="thumbnailshadow" src="img/image-1.png"/>

From this moment, the Google robot should visit your site soon to do a scan. We could stop here, but we will provide additional information to help the robot!

To do so, we will provide Google with a map of our site. These are usually self-managed files called "sitemaps", in XML format.

The majority of CMSs (content management systems) have this functionality integrated. In my case I use Ghost, so I will continue this tutorial with that example. If you use another CMS (e.g. WordPress, Shopify, Hugo, ...) you should find plenty of resources on the subject with a Google search.

In my case the sitemap files are located at the following address: https://www.d3vyce.fr/sitemap.xml

<img class="thumbnailshadow" src="img/image-2.png"/>

This link leads to a sitemap index which is itself composed of several sitemap files. We have the choice to add them file by file, or to add the sitemap index directly.
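For reference, a sitemap index is just XML pointing at child sitemaps. A minimal hand-written sketch looks like this (the file names here are hypothetical; Ghost generates its own):

```
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://www.d3vyce.fr/sitemap-pages.xml</loc>
  </sitemap>
  <sitemap>
    <loc>https://www.d3vyce.fr/sitemap-posts.xml</loc>
  </sitemap>
</sitemapindex>
```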
|
||||
|
||||
To add our sitemap, go to Index > Sitemaps, then add the link to the index file.

<img class="thumbnailshadow" src="img/image-3.png"/>

After a few minutes, we can see that our sitemap index has been detected and that our 5 sitemaps have been imported!

<img class="thumbnailshadow" src="img/image-4.png"/>

<img class="thumbnailshadow" src="img/image-5.png"/>

After this step, there is nothing left to do but wait. This can take anywhere from a few hours to several weeks.

---
EDIT:

After about 36 hours, my blog was indexed on Google and is now accessible with a simple search!

<img class="thumbnailshadow" src="img/image-6.png"/>

I then went to look at the site's access logs, where we can see Googlebot crawling the site:

```
[02/Feb/2022:03:11:17 +0100] - 200 200 - GET https www.d3vyce.fr "/favicon.ico" [Client 66.249.69.27] [Length 3798] [Gzip -] [Sent-to X.X.X.X] "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" "-"
[02/Feb/2022:03:26:17 +0100] - 404 404 - GET https www.d3vyce.fr "/contact/" [Client 66.249.69.29] [Length 660] [Gzip -] [Sent-to X.X.X.X] "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" "-"
[02/Feb/2022:02:16:53 +0100] - 200 200 - GET https www.d3vyce.fr "/author/nicolas/" [Client 66.249.69.27] [Length 5923] [Gzip -] [Sent-to X.X.X.X] "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" "-"
[02/Feb/2022:01:52:01 +0100] - 200 200 - GET https www.d3vyce.fr "/sitemap.xml" [Client 66.249.69.27] [Length 637] [Gzip -] [Sent-to X.X.X.X] "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" "-"
```

I could also see other bots passing through, like Bing's:

```
[02/Feb/2022:00:34:43 +0100] - 200 200 - GET https www.d3vyce.fr "/robots.txt" [Client 40.77.167.104] [Length 107] [Gzip -] [Sent-to X.X.X.X] "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)" "-"
[02/Feb/2022:00:34:51 +0100] - 200 200 - GET https www.d3vyce.fr "/" [Client 40.77.167.40] [Length 6253] [Gzip -] [Sent-to X.X.X.X] "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)" "-"
```
BIN
content/posts/how-to-make-daily-backups-of-your-homelab/featured.png
(Stored with Git LFS)
Normal file
Binary file not shown.
BIN
content/posts/how-to-make-daily-backups-of-your-homelab/img/image-1.png
(Stored with Git LFS)
Normal file
Binary file not shown.
BIN
content/posts/how-to-make-daily-backups-of-your-homelab/img/image-2.png
(Stored with Git LFS)
Normal file
Binary file not shown.
BIN
content/posts/how-to-make-daily-backups-of-your-homelab/img/image-3.png
(Stored with Git LFS)
Normal file
Binary file not shown.
BIN
content/posts/how-to-make-daily-backups-of-your-homelab/img/image-4.png
(Stored with Git LFS)
Normal file
Binary file not shown.
108
content/posts/how-to-make-daily-backups-of-your-homelab/index.md
Normal file
@ -0,0 +1,108 @@
---
title: "How to make daily backups of your HomeLab?"
date: 2022-07-19
draft: false
slug: "how-to-make-daily-backups-of-your-homelab"
tags: ["backup", "homelab"]
---
You have a local homelab where you store all your data, and you have set up safeguards against data loss: parity disk, hot spare, cold spare, RAID, ... But if your server burns down, or a power surge fries it along with the data it contains, are you ready?

In this article we will talk about the 3-2-1 backup rule. What is it?

> The term 3-2-1 was coined by US photographer Peter Krogh while writing a book about digital asset management in the early noughties.
> - 3: The rule said there should be three copies of data. One is the original or production copy, then there should be two more copies, making three.
> - 2: The two backup copies should be on different media. The theory here was that should one backup be corrupted or destroyed, then another would be available. Those threats could be IT-based, which dictated that data be held in a discrete system or media, or physical.
> - 1: The final “one” referred to the rule that one copy of the two backups should be taken off-site, so that anything that affected the first copy would not (hopefully) affect it.
> — <cite>[computerweekly.com](https://www.computerweekly.com)</cite>
<img class="thumbnailshadow" src="img/image-1.png"/>

The 3-2-1 backup rule lets you store data with almost no risk of loss. Many people now argue that the rule has lost some of its relevance with the arrival of the cloud: providers such as Google, Amazon, ... replicate data across several disks and, more importantly, across several locations. All of this is invisible to the user, but those protections are very much there.

---

In our case, a NAS, depending on how it is configured, protects against data loss from a dead disk: parity, RAID, ... This can be counted as 2 copies. But as noted in the intro, those 2 copies live in the same server, or at best in the same apartment/home. If there is a fire, both copies go up in smoke.

To stay in the homelab spirit, the ideal would be to deploy a second server in another location (several kilometers away if possible) and replicate the data to it. This solution, although ideal, is not possible for everyone, whether because of cost or the lack of a place to put the server.

That's why I looked for another solution: the cloud! For cheap, or even free, we can back up our most important data automatically, with a versioning system that keeps up to 30 days of history!
## Cloud Provider
The choice of cloud provider is quite personal. In my case I chose Google Drive because I already had an account and I knew it has a built-in version history feature.

Depending on how much space you need, it may be worth exploring other options.

Rclone supports a large selection of providers, listed at the following address: [https://rclone.org/](https://rclone.org/?ref=blog.d3vyce.fr)

## Rclone setup
Once you have chosen your cloud provider, you need to set it up on your server to be able to make backups.

In my case I followed this tutorial for Google Drive: [Link](https://ostechnix.com/mount-google-drive-using-rclone-in-linux/?ref=blog.d3vyce.fr). There are also guides for the various providers on the Rclone website, but I found them less complete.

Once the configuration is done, I can check that it works with the following command:
```
xxxxxx@Mark1:~# rclone about google:
Total:   100 GiB
Used:    3.504 GiB
Free:    93.562 GiB
Trashed: 3.746 MiB
Other:   2.934 GiB
```
## Backup setup
Now that we have access to our cloud provider, we just need to automate the backup.

This first script mounts our cloud folder so we can access it:

```
#!/bin/bash

# Create the mount point and mount the "google" remote in the background
mkdir -p /mnt/disks/google
rclone mount --max-read-ahead 1024k --allow-other google: /mnt/disks/google &
```
This second script unmounts the cloud folder when the server is stopped:

```
#!/bin/bash

# Cleanly detach the FUSE mount created by rclone
fusermount -u /mnt/disks/google
```
This last script is the most important one: it is here that we choose what to back up! First I create an archive, save_mark1.zip, to which I add as many folders as I want; in the example I add /mnt/user/d3vyce and /mnt/user/appdata/ghost. Then I sync the archive to the cloud provider set up earlier (Google). Finally I delete the local archive.

```
#!/bin/bash

# Archive the folders to back up, upload the archive, then clean up locally
zip -r /tmp/save_mark1.zip /mnt/user/d3vyce /mnt/user/appdata/ghost [...]
rclone sync /tmp/save_mark1.zip google:save_mark1
rm /tmp/save_mark1.zip
```
In Unraid's User Scripts plugin, I add the scripts with the following execution conditions:
- **mount** -> At Startup of Array
- **backup** -> Scheduled Daily
- **unmount** -> At Stopping of Array
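Outside Unraid, the same schedule can be reproduced with plain cron; the equivalent crontab entries would look like this (a sketch; the script paths are hypothetical):

```
# /etc/crontab-style entries (paths are hypothetical; point them at your scripts)
@reboot    root  /opt/scripts/mount.sh
0 3 * * *  root  /opt/scripts/backup.sh
```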
This should result in the following:

<img class="thumbnailshadow" src="img/image-2.png"/>

After waiting one day, I check on the Drive that the backup was made:

<img class="thumbnailshadow" src="img/image-3.png"/>

And indeed, an archive of about 1 GB has been uploaded to the save_mark1 folder. The system works!

I then let the script run for several days to check that the versioning system works properly. As you can see, I have about 30 days of history for the file. Interestingly, only the current archive counts against my Drive quota, not all the versions in the history: ~1 GB used, with 30 versions of the archive accessible.

<img class="thumbnailshadow" src="img/image-4.png"/>

## Conclusion

In conclusion, this solution is very practical and potentially free depending on how much data you want to back up. Rclone has many more features (data encryption, logging, ...) that I'll let you explore!

In my case it lets me back up sensitive data, such as this blog's database, which previously had no offsite backup.
BIN
content/posts/my-current-homelab/featured.png
(Stored with Git LFS)
Normal file
Binary file not shown.
BIN
content/posts/my-current-homelab/icon/Radarr.png
(Stored with Git LFS)
Normal file
Binary file not shown.
BIN
content/posts/my-current-homelab/img/image-1.png
(Stored with Git LFS)
Normal file
Binary file not shown.
BIN
content/posts/my-current-homelab/img/image-2.png
(Stored with Git LFS)
Normal file
Binary file not shown.
BIN
content/posts/my-current-homelab/img/image-3.png
(Stored with Git LFS)
Normal file
Binary file not shown.
116
content/posts/my-current-homelab/index.md
Normal file
@ -0,0 +1,116 @@
---
title: "My current Homelab"
date: 2022-02-11
draft: false
slug: "my-current-homelab"
tags: ["homelab"]
---
As you may or may not know, I have a homelab at home. For those who don't know what that is:

> A home lab is essentially a compounded system that connects all your devices. Thus, creating an environment for you to experiment and build new projects at the comfort of your home!

A homelab is, quite simply, a local network for experimenting, self-hosting services, ...

Initially, my goal was to create a media server accessible from anywhere with Plex. That was the starting point, but you will see that today my homelab goes well beyond it: I have over 20 self-hosted services, ranging from a password manager to web hosting, game servers, etc.

To present all of this, I will first list the hardware, then the topology I used and the services I host, and I will finish with a conclusion on the first 3 years of my homelab!

---
## Hardware
Currently my homelab is composed of the following elements:
### Networking
- Unifi Dream Machine (Router/Firewall)
- Unifi Switch Lite 8 PoE
- Unifi AP WiFi 6 Lite
- Unifi AP WiFi 6 LR
- Netgear GSS108E 8-Port
### Mark1 : Storage server / VM / Docker
- AMD Ryzen 7 - 1700X (8/16 core)
- NVIDIA 750 Ti
- 32 GiB DDR4
- 2* 500GB SSD (Cache)
- 2* 4TB HDD (Parity)
- 2* 3TB + 2* 4TB HDD = 18TB (Storage)
### Mark2 : Gaming server / Lab
- Intel i7 - 7700K (4/8 core)
- 32 GiB DDR4
- 500GB SSD
### HOME NAS : Backup storage
- Synology DS418
- 3* 4TB HDD

---
## Topology
<img class="thumbnailshadow" src="img/image-1.png"/>

In terms of network architecture it is quite simple: there is a single subnet, 10.0.0.0/24, which I have subdivided for the different equipment:
- 10.0.0.1 : Unifi Dream Machine
- 10.0.0.2->9 : Network Equipment
- 10.0.0.10->29 : Server/Service
- 10.0.0.30->250 : DHCP leases

No VLANs, no multiple subnets, ... nothing very complicated, in short! This topology has some limitations, which I will come back to in the conclusion (a v2 of the homelab is in development/deployment).
The vast majority of services/VMs/storage are on the Mark1 server. This server runs Unraid, an OS based on a Linux kernel that offers a multitude of options on top of its main NAS function.

[Unraid](https://unraid.net) is a paid OS offered in 3 editions:

<img class="thumbnailshadow" src="img/image-2.png"/>

The only difference between them is the number of storage devices you can attach to the server. In my case I am on the "Plus" edition. It is a one-time payment that unlocks all the features.

You can try the OS with a trial version valid for 30 days, which gives you time to decide whether it suits you. If you don't want to pay, there are alternatives, for example [TrueNAS](https://www.truenas.com), which shares many features and is updated regularly.

I personally chose Unraid for its stability and its numerous virtualization features which, at the time I built the server, were not yet fully mature in TrueNAS.

Mark2 is a server running Ubuntu Server; it is mainly used for game servers (Minecraft, Rust, ...), managed with [AMP](https://cubecoders.com/AMP). Beyond game servers, I use it as a test bench for ongoing projects.

---
## Services
As you can see on the diagram, many services run in my homelab. Most of them live on the Mark1 server as Docker containers.

| | Name | Description |
| ----------- | ----------- | ----------- |
|  | Radarr | Movie collection manager |
|  | Sonarr | Series collection manager |
|  | Bazarr | Subtitle finder for movies and series |
|  | Jackett | Proxy server for indexers |
|  | AdGuardHome | DNS server that blocks ads and tracking |
|  | Bitwarden | Password manager |
|  | Deluge | Torrent downloader |
|  | Gitea | Self-hosted GitHub |
|  | Home Assistant | IoT manager (Zigbee) |
|  | Nginx Proxy Manager | Reverse proxy |
|  | Plex | Remote access to movies and series |
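As an illustration, each of these containers typically boils down to a small compose definition. A minimal sketch for one of them (the image name and ports are the upstream AdGuard Home defaults; the appdata paths are hypothetical):

```
version: "3"
services:
  adguardhome:
    image: adguard/adguardhome
    ports:
      - "53:53/udp"   # DNS
      - "3000:3000"   # initial setup UI
    volumes:
      # hypothetical appdata paths, adjust to your layout
      - /mnt/user/appdata/adguard/work:/opt/adguardhome/work
      - /mnt/user/appdata/adguard/conf:/opt/adguardhome/conf
    restart: unless-stopped
```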
In addition to these services, I run two databases, MariaDB and Redis, a VPN service that lets me connect to the LAN from outside (WireGuard), and a backup VPN on the Home NAS (OpenVPN).

In terms of VMs on Mark1, I have 2 Ubuntu VMs for web hosting, a GNS3 VM for network labs, a VM running Home Assistant, a Debian VM for an ongoing Docker project, and a Kali VM for pentesting and remote access to security tools.

<img class="thumbnailshadow" src="img/image-3.png"/>

---
## Conclusion
What I take away from this homelab is that, despite the modest objectives I had initially set for myself, the network is constantly evolving: I always find new things to do to improve it, new services to add, security to tighten, ...

Overall this experience is very rewarding. I am always looking for improvements and I learn a lot across several areas: monitoring, networking, quality of service, ... I recommend it to anyone in the networking field who is curious to learn new things and put them into practice.

This may seem like a lot of hardware, but it is the accumulation of more than 3-4 years, so if you want to get started, a simple old laptop or a Raspberry Pi may be enough.

To learn more and explore the world of homelabs, I suggest taking a look at this subreddit: [Homelab](https://www.reddit.com/r/homelab). There are many resources there, whether on hardware, software, topology, ...

### Future plans for Homelab v2
This topology was created as a temporary base, but it poses many problems, especially in terms of security. I am updating it with several improvements:
- VLANs
- Multiple subnets
- Monitoring
- Intrusion detection
- Switch to OPNsense

There will be a blog post about the v2 of my homelab in the coming months!
BIN
content/posts/test/featured.png
(Stored with Git LFS)
Binary file not shown.
@ -1,235 +0,0 @@
---
title: "Advanced Customisation"
date: 2020-08-08
draft: false
description: "Learn how to build Blowfish manually."
slug: "advanced-customisation"
tags: ["advanced", "css", "docs"]
# series: ["Documentation"]
# series_order: 13
---
There are many ways you can make advanced changes to Blowfish. Read below to learn more about what can be customised and the best way of achieving your desired result.

If you need further advice, post your questions on [GitHub Discussions](https://github.com/nunocoracao/blowfish/discussions).

## Hugo project structure

Before leaping into it, first a quick note about [Hugo project structure](https://gohugo.io/getting-started/directory-structure/) and best practices for managing your content and theme customisations.

{{< alert >}}
**In summary:** Never directly edit the theme files. Only make customisations in your Hugo project's sub-directories, not in the themes directory itself.
{{< /alert >}}

Blowfish is built to take advantage of all the standard Hugo practices. It is designed to allow all aspects of the theme to be customised and overridden without changing any of the core theme files. This allows for a seamless upgrade experience while giving you total control over the look and feel of your website.

In order to achieve this, you should never manually adjust any of the theme files directly. Whether you install using Hugo modules, as a git submodule or manually include the theme in your `themes/` directory, you should always leave these files intact.

The correct way to adjust any theme behaviour is by overriding files using Hugo's powerful [file lookup order](https://gohugo.io/templates/lookup-order/). In summary, the lookup order ensures any files you include in your project directory will automatically take precedence over any theme files.

For example, if you wanted to override the main article template in Blowfish, you can simply create your own `layouts/_default/single.html` file and place it in the root of your project. This file will then override the `single.html` from the theme without ever changing the theme itself. This works for any theme files - HTML templates, partials, shortcodes, config files, data, assets, etc.

As long as you follow this simple practice, you will always be able to update the theme (or test different theme versions) without worrying that you will lose any of your custom changes.
## Change image optimization settings

Hugo has various built-in methods to resize, crop and optimize images.

As an example - in `layouts/partials/article-link/card.html`, you have the following code:

```go
{{ with .Resize "600x" }}
<div class="w-full thumbnail_card nozoom" style="background-image:url({{ .RelPermalink }});"></div>
{{ end }}
```

The default behaviour of Hugo here is to resize the image to 600px, keeping the aspect ratio.

It is worth noting here that default image configurations such as [anchor point](https://gohugo.io/content-management/image-processing/#anchor) can also be set in your [site configuration](https://gohugo.io/content-management/image-processing/#processing-options) as well as in the template itself.

See the [Hugo docs on image processing](https://gohugo.io/content-management/image-processing/#image-processing-methods) for more info.
## Colour schemes

Blowfish ships with a number of colour schemes out of the box. To change the basic colour scheme, you can set the `colorScheme` theme parameter. Refer to the [Getting Started]() section to learn more about the built-in schemes.

In addition to the default schemes, you can also create your own and re-style the entire website to your liking. Schemes are created by placing a `<scheme-name>.css` file in the `assets/css/schemes/` folder. Once the file is created, simply refer to it by name in the theme configuration.

{{< alert "github">}}
**Note:** generating these files manually can be hard, so I've built a `nodejs` terminal tool to help with that, [Fugu](https://github.com/nunocoracao/fugu). In a nutshell, you pass the three main `hex` values of your colour palette and the program outputs a CSS file that can be imported directly into Blowfish.
{{< /alert >}}

Blowfish defines a three-colour palette that is used throughout the theme. The three colours are defined as `neutral`, `primary` and `secondary` variants, each containing ten shades of colour.

Due to the way Tailwind CSS 3.0 calculates colour values with opacity, the colours specified in the scheme need to [conform to a particular format](https://github.com/adamwathan/tailwind-css-variable-text-opacity-demo) by providing the red, green and blue colour values.

```css
:root {
  --color-primary-500: 139, 92, 246;
}
```

This example defines a CSS variable for the `primary-500` colour with a red value of `139`, green value of `92` and blue value of `246`.

Use one of the existing theme stylesheets as a template. You are free to define your own colours, but for some inspiration, check out the official [Tailwind colour palette reference](https://tailwindcss.com/docs/customizing-colors#color-palette-reference).
## Overriding the stylesheet

Sometimes you need to add custom styles for your own HTML elements. Blowfish provides for this scenario by allowing you to override the default styles with your own CSS stylesheet. Simply create a `custom.css` file in your project's `assets/css/` folder.

The `custom.css` file will be minified by Hugo and loaded automatically after all the other theme styles, which means anything in your custom file will take precedence over the defaults.

### Using additional fonts

Blowfish allows you to easily change the font for your site. After creating a `custom.css` file in your project's `assets/css/` folder, place your font file inside a `fonts` folder within the `static` root folder.

```shell
.
├── assets
│   └── css
│       └── custom.css
...
└── static
    └── fonts
        └── font.ttf
```
This makes the font available to the website. Now the font can be imported in your `custom.css` and used wherever you see fit. The example below shows what replacing the font for the entire `html` element would look like.

```css
@font-face {
    font-family: font;
    src: url('/fonts/font.ttf');
}

html {
    font-family: font;
}
```

### Adjusting the font size

Changing the font size of your website is one example of overriding the default stylesheet. Blowfish makes this simple as it uses scaled font sizes throughout the theme which are derived from the base HTML font size. By default, Tailwind sets the default size to `12pt`, but it can be changed to whatever value you prefer.

Create a `custom.css` file using the [instructions above]({{< ref "#overriding-the-stylesheet" >}}) and add the following CSS declaration:

```css
/* Increase the default font size */
html {
  font-size: 13pt;
}
```

Simply by changing this one value, all the font sizes on your website will be adjusted to match this new size. Therefore, to increase the overall font sizes used, make the value greater than `12pt`. Similarly, to decrease the font sizes, make the value less than `12pt`.
## Building the theme CSS from source

If you'd like to make a major change, you can take advantage of Tailwind CSS's JIT compiler and rebuild the entire theme CSS from scratch. This is useful if you want to adjust the Tailwind configuration or add extra Tailwind classes to the main stylesheet.

{{< alert >}}
**Note:** Building the theme manually is intended for advanced users.
{{< /alert >}}

Let's step through how building the Tailwind CSS works.

### Tailwind configuration

In order to generate a CSS file that only contains the Tailwind classes that are actually being used, the JIT compiler needs to scan through all the HTML templates and Markdown content files to check which styles are present in the markup. The compiler does this by looking at the `tailwind.config.js` file which is included in the root of the theme directory:

```js
// themes/blowfish/tailwind.config.js

module.exports = {
  content: [
    "./layouts/**/*.html",
    "./content/**/*.{html,md}",
    "./themes/blowfish/layouts/**/*.html",
    "./themes/blowfish/content/**/*.{html,md}",
  ],

  // and more...
};
```

This default configuration includes these content paths so that you can easily generate your own CSS file without needing to modify it, provided you follow a particular project structure. Namely, **you have to include Blowfish in your project as a subdirectory at `themes/blowfish/`**. This means you cannot easily use Hugo Modules to install the theme and you must go down either the git submodule (recommended) or manual install routes. The [Installation docs]() explain how to install the theme using either of these methods.
### Project structure

In order to take advantage of the default configuration, your project should look something like this...

```shell
.
├── assets
│   └── css
│       └── compiled
│           └── main.css   # this is the file we will generate
├── config                 # site config
│   └── _default
├── content                # site content
│   ├── _index.md
│   ├── projects
│   │   └── _index.md
│   └── blog
│       └── _index.md
├── layouts                # custom layouts for your site
│   ├── partials
│   │   └── extend-article-link/simple.html
│   ├── projects
│   │   └── list.html
│   └── shortcodes
│       └── disclaimer.html
└── themes
    └── blowfish           # git submodule or manual theme install
```

This example structure adds a new `projects` content type with its own custom layout along with a custom shortcode and extended partial. Provided the project follows this structure, all that's required is to recompile the `main.css` file.
### Install dependencies

In order for this to work you'll need to change into the `themes/blowfish/` directory and install the project dependencies. You'll need [npm](https://docs.npmjs.com/cli/v7/configuring-npm/install) on your local machine for this step.

```shell
cd themes/blowfish
npm install
```

### Run the Tailwind compiler

With the dependencies installed, all that's left is to use the [Tailwind CLI](https://v2.tailwindcss.com/docs/installation#using-tailwind-cli) to invoke the JIT compiler. Navigate back to the root of your Hugo project and issue the following command:

```shell
cd ../..
./themes/blowfish/node_modules/tailwindcss/lib/cli.js -c ./themes/blowfish/tailwind.config.js -i ./themes/blowfish/assets/css/main.css -o ./assets/css/compiled/main.css --jit
```

It's a bit of an ugly command due to the paths involved, but essentially you're calling the Tailwind CLI and passing it the location of the Tailwind config file (the one we looked at above), where to find the theme's `main.css` file, and where you want the compiled CSS file to be placed (it's going into the `assets/css/compiled/` folder of your Hugo project).

The config file will automatically inspect all the content and layouts in your project as well as all those in the theme and build a new CSS file that contains all the CSS required for your website. Due to the way Hugo handles file hierarchy, this file in your project will now automatically override the one that comes with the theme.

Each time you make a change to your layouts and need new Tailwind CSS styles, you can simply re-run the command and generate the new CSS file. You can also add `-w` to the end of the command to run the JIT compiler in watch mode.
### Make a build script

To round off this solution, you can simplify the whole process by adding aliases for these commands, or do what I do and add a `package.json` to the root of your project which contains the necessary scripts...

```js
// package.json

{
  "name": "my-website",
  "version": "1.0.0",
  "description": "",
  "scripts": {
    "server": "hugo server -b http://localhost -p 8000",
    "dev": "NODE_ENV=development ./themes/blowfish/node_modules/tailwindcss/lib/cli.js -c ./themes/blowfish/tailwind.config.js -i ./themes/blowfish/assets/css/main.css -o ./assets/css/compiled/main.css --jit -w",
    "build": "NODE_ENV=production ./themes/blowfish/node_modules/tailwindcss/lib/cli.js -c ./themes/blowfish/tailwind.config.js -i ./themes/blowfish/assets/css/main.css -o ./assets/css/compiled/main.css --jit"
  },
  // and more...
}
```

Now when you want to work on designing your site, invoke `npm run dev` and the compiler will run in watch mode. When you're ready to deploy, run `npm run build` and you'll get a clean Tailwind CSS build.

🙋‍♀️ If you need help, feel free to ask a question on [GitHub Discussions](https://github.com/nunocoracao/blowfish/discussions).
BIN
static/android-chrome-192x192.png
(Stored with Git LFS)
Normal file
Binary file not shown.
BIN
static/android-chrome-512x512.png
(Stored with Git LFS)
Normal file
Binary file not shown.
BIN
static/apple-touch-icon.png
(Stored with Git LFS)
Normal file
Binary file not shown.
BIN
static/favicon-16x16.png
(Stored with Git LFS)
Normal file
Binary file not shown.
BIN
static/favicon-32x32.png
(Stored with Git LFS)
Normal file
Binary file not shown.
BIN
static/favicon.ico
(Stored with Git LFS)
Normal file
Binary file not shown.
1
static/site.webmanifest
Normal file
@ -0,0 +1 @@
{"name":"","short_name":"","icons":[{"src":"/android-chrome-192x192.png","sizes":"192x192","type":"image/png"},{"src":"/android-chrome-512x512.png","sizes":"512x512","type":"image/png"}],"theme_color":"#ffffff","background_color":"#ffffff","display":"standalone"}