convert pgn into webp + start hugo migration article
All checks were successful
Build Blog Docker Image / build docker (push) Successful in 1m1s
This commit is contained in:
parent
b2a0e46d4e
commit
052a6334cb
@ -9,7 +9,7 @@ jobs:
   build docker:
     runs-on: linux_amd
     steps:
-      - name: checkout code
+      - name: Checkout code
         uses: actions/checkout@v3
         # with:
         #   lfs: 'true'
@ -1,3 +1,4 @@
+# Build Stage
 FROM git.d3vyce.fr/d3vyce/hugo:latest AS build

 WORKDIR /opt/blog
@ -6,12 +7,11 @@ COPY . /opt/blog/
 RUN git submodule update --init --recursive
 RUN hugo


 # Publish Stage
 FROM nginx:1.25-alpine

 WORKDIR /usr/share/nginx/html
 COPY --from=build /opt/blog/public /usr/share/nginx/html/
-COPY nginx/nginx.conf /etc/nginx/nginx.conf
-COPY nginx/default.conf /etc/nginx/conf.d/default.conf
+COPY nginx/ /etc/nginx/

 EXPOSE 80/tcp
@ -6,4 +6,6 @@ export PATH=$PATH:/usr/local/go/bin
 CGO_ENABLED=1 go install -tags extended github.com/gohugoio/hugo@latest
+git submodule update --recursive
+git lfs pull

 hugo server --buildDrafts
 ```
BIN assets/img/author.png (Stored with Git LFS): binary file not shown
BIN assets/img/author.webp (Stored with Git LFS, new file): binary file not shown
BIN assets/img/author_transparent.png (Stored with Git LFS): binary file not shown
BIN assets/img/author_transparent.webp (Stored with Git LFS, new file): binary file not shown
@ -8,7 +8,7 @@ title = "d3vyce Blog"
   isoCode = "en"
   rtl = false
   dateFormat = "2 January 2006"
-  logo = "img/author_transparent.png"
+  logo = "img/author_transparent.webp"
   # secondaryLogo = "img/secondary-logo.png"
   description = "Hi 👋, Welcome to my Blog!"
   copyright = "d3vyce 2024 © All rights reserved."

@ -37,7 +37,7 @@ smartTOCHideUnfocusedChildren = false

 [homepage]
   layout = "custom" # valid options: page, profile, hero, card, background, custom
-  homepageImage = "img/ocean.jpg" # used in: hero, and card
+  homepageImage = "img/ocean.webp" # used in: hero, and card
   showRecent = true
   showRecentItems = 9
   showMoreLink = true
@ -28,7 +28,7 @@ First of all, let's start by installing and configuring the docker. For that you

In my case I use Unraid and a template is directly available. I just have to set the port to use for the web interface.

-
+

Before launching the docker we will have to make several changes to the configuration file.

@ -104,7 +104,7 @@ docker run --rm authelia/authelia:latest authelia hash-password 'yourpassword'

If this does not work you can manually create the hash using this [site](https://argon2.online) and these parameters:

-
+

### Access control
For the access policy we will do something simple with a single role. But nothing prevents you from creating several roles with different rights and access.
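A minimal sketch of such a single-role policy, following the standard Authelia `access_control` layout (the domain is a placeholder, not taken from the post):

```yaml
access_control:
  default_policy: deny              # block anything not matched by a rule
  rules:
    - domain: "*.yourdomain.com"    # every proxied sub-domain
      policy: one_factor            # or two_factor
```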
@ -284,17 +284,17 @@ real_ip_recursive on;

Then you can add it in the `Advanced` tab of the desired subdomain:

-
+

After saving, go to the address of your subdomain to verify that it works. You should arrive on the following page:

-
+

You can connect with one of the credentials you created in the `users_database.yml` file. Once the connection is done, you should be redirected to the application hosted on the subdomain!

To configure/modify the double authentication method, go to the subdomain you have configured for Authelia (`ex. auth.youdomain.com`). Then select `Methods`:

-
+

## Conclusion

@ -18,7 +18,7 @@ As described in the intro one of the big problems of self hosted on a non profes
> Dynamic DNS (DDNS) is a method of automatically updating a name server in the Domain Name System (DNS), often in real time, with the active DDNS configuration of its configured hostnames, addresses or other information.
> — <cite>[Wikipedia](https://en.wikipedia.org/wiki/Dynamic_DNS)</cite>

-
+

The operation of DDNS is separated into 3 parts:

@ -36,11 +36,11 @@ If you have a fixed IP with your internet provider, you do not need to do the fo

In my case I decided to use [DuckDNS](https://www.duckdns.org), it's a free service and easily configurable. First you will have to create an account with the service of your choice. Then you have to get your token, it's your unique identifier that will allow DuckDNS to identify you.

-
+

You will now have to create a sub domain to the duckdns.org domain. To do this, simply fill in the "sub domain" field and click on "add domain".

-
+

Then go to your docker manager to install the [linuxserver/duckdns](https://hub.docker.com/r/linuxserver/duckdns) docker. The docker compose is quite simple, you just have to indicate the two following elements:
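A minimal compose sketch of that container, assuming the standard linuxserver image layout (the sub-domain and token values are placeholders):

```yaml
services:
  duckdns:
    image: lscr.io/linuxserver/duckdns:latest
    environment:
      - SUBDOMAINS=xxxx          # your DuckDNS sub-domain
      - TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    restart: unless-stopped
```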
@ -51,7 +51,7 @@ TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

You can then launch the docker, if all is well configured you can return to DuckDNS and verify that it has received your IP:

-
+

## Sub-domain creation
Now that we have a domain at DuckDNS, we will have to link our personal domain/sub-domain to the DuckDNS sub-domain.
@ -65,7 +65,7 @@ To do the redirection you have to create DNS entries of type CNAME.

To create a CNAME entry, all you need is a sub-domain and a target:

-
+

In this example I create a sub-domain "www.d3vyce.fr" which redirects to the DuckDNS domain "xxxx.duckdns.org". Once the information is propagated on the different DNS servers, I should be able to access the IP of my box via this sub-domain.

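In zone-file notation, the record created above would look like this (the trailing dot marks a fully qualified target):

```
www    IN    CNAME    xxxx.duckdns.org.
```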
@ -96,7 +96,7 @@ To do this we will set up a Reserve Proxy.
> A reverse proxy server is a type of proxy server that typically sits behind the firewall in a private network and directs client requests to the appropriate backend server.
> — <cite>[Nginx.com](https://www.nginx.com/resources/glossary/reverse-proxy-server)</cite>

-
+

Globally the reverse proxy will inspect the source domain to determine which local IP/Port to redirect the request to.

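As a rough sketch of what that lookup amounts to for one host (Nginx Proxy Manager generates equivalent configuration for you; this is illustrative, not its exact output):

```nginx
server {
    listen 80;
    server_name test.d3vyce.fr;              # the public sub-domain

    location / {
        proxy_pass http://192.168.1.10:80;   # local IP/port of the service
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```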
@ -144,7 +144,7 @@ Password: changeme

After creating a user, we can add our first service! To do this go to the Hosts -> Proxy Hosts tab. Now click on "Add Proxy Host".

-
+

This is where we will have to fill in our sub-domain, the local IP of the service and its port. In the example above, I configure the sub-domain "test.d3vyce.fr" with the local web server which is at the address 192.168.1.10:80.

@ -156,7 +156,7 @@ The "Websockets Support" option can be the cause of problems for some applicatio

Now let's configure our SSL certificate.

-
+

Select the "Request a new SSL Certificate" option, then the "Force SSL", "HTTP/2 Support" and "HSTS Enabled" options. Then fill in your email and accept the terms of service. You can now save. After a few seconds you should see the status "Online" for your subdomain. If you have no errors you can now access your service with this subdomain! Using the same principle, you can set up other services.
@ -24,7 +24,7 @@ https://search.google.com/u/0/search-console/welcome

You should land on the following page, which gives us the choice between two options.

-
+

Domain: this option is only useful if you use your domain name solely for your website and have no subdomains. For example, in my case I have several subdomains (ex. status.d3vyce.fr) but I don't want them to be indexed.

@ -32,7 +32,7 @@ URL prefix : this second option allows to declare a precise URL and not an entir

You can then enter your domain. In my case it is https://www.d3vyce.fr. If all goes well the ownership verification of your domain is automatic. But if it's not the case, don't panic, an error message will tell you how to solve the problem. Globally, Google will provide you with a file that you should host on your site, this will verify that you have control of the site and therefore the domain.

-
+

From this moment, the Google robot should visit your site soon to do a scan. We could stop here, but we will provide additional information to help the robot!

@ -42,18 +42,18 @@ The majority of CMS (Content management system) have this functionality integrat

In my case the sitemap files are located at the following address: https://www.d3vyce.fr/sitemap.xml

-
+

This link leads to a sitemap index which is itself composed of several sitemap files. We have the choice to add file by file or directly the sitemap index.
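For reference, a sitemap index like the one Hugo generates has this shape (the child sitemap URLs are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://www.d3vyce.fr/posts/sitemap.xml</loc>
  </sitemap>
  <sitemap>
    <loc>https://www.d3vyce.fr/tags/sitemap.xml</loc>
  </sitemap>
</sitemapindex>
```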
To add our sitemap, we must go to Index > Sitemaps. Then we add the link to our index file.

-
+

After a few minutes we notice that our sitemap index has been detected and that our 5 sitemaps have been imported!

-
+
-
+

After this step, there is nothing left to do but wait. This can take from a few hours to several weeks in some cases.

---

@ -61,7 +61,7 @@ After this step, there is nothing left to do but wait. This can take from a few
## Update
After about 36 hours, my blog had been indexed on Google and is now accessible with a simple search!

-
+

I then went to see the access logs of the site, and we can observe the passages of the Googlebot which scans the site:
```
@ -16,7 +16,7 @@ You have your local homelab in which you store all your data. You have set up se
> - 1: The final “one” referred to the rule that one copy of the two backups should be taken off-site, so that anything that affected the first copy would not (hopefully) affect it.
> — <cite>[computerweekly.com](https://www.computerweekly.com)</cite>

-
+

The 3-2-1 Backup Rule allows you to store data with almost no risk of loss. Today many people argue that this rule no longer really makes sense with the arrival of the cloud. Indeed, providers such as Google, Amazon, ... have replication systems over several disks, and above all in several locations. All this remains invisible to the user, but these protections are well present.

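The backup scripts themselves are not shown in this excerpt; a minimal sketch of one such step, under the assumption that it archives a share into a dated tarball and then uploads it (the paths and the commented rclone upload are assumptions, not the author's actual script):

```shell
#!/bin/bash
# Archive a source directory into a dated tarball, then upload it.
# BACKUP_SRC defaults to a small file here so the sketch is runnable;
# on Unraid it would be something like /mnt/user/appdata (assumption).
SRC="${BACKUP_SRC:-/etc/hosts}"
DEST=/tmp/backups
mkdir -p "$DEST"
ARCHIVE="$DEST/save_mark1_$(date +%F).tar.gz"
tar -czf "$ARCHIVE" -C "$(dirname "$SRC")" "$(basename "$SRC")"
# rclone copy "$ARCHIVE" remote:save_mark1   # hypothetical upload step
```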
@ -89,17 +89,17 @@ In the script plugin of Unraid I add the different scripts with the following ex

This should result in the following:

-
+

After waiting one day I check on the drive that the backup has been done:

-
+

And indeed there is an archive of about 1 GB that has been uploaded to the save_mark1 folder. The system works!

I then let the script run for several days to see if the history system works well. As you can see I have a history of the file for about 30 days. An interesting thing to know is that only the archive consumes space on my drive, and not all the versions in the history. This makes it consume ~1 GB with 30 versions of the archive accessible.

-
+

## Conclusion

BIN content/posts/migrate-from-ghost-to-hugo/img/image-1.png (Stored with Git LFS, new file): binary file not shown
BIN content/posts/migrate-from-ghost-to-hugo/img/image-1.webp (Stored with Git LFS, new file): binary file not shown
BIN content/posts/migrate-from-ghost-to-hugo/img/image-2.png (Stored with Git LFS, new file): binary file not shown
BIN content/posts/migrate-from-ghost-to-hugo/img/image-2.webp (Stored with Git LFS, new file): binary file not shown
@ -3,7 +3,149 @@ title: "Migrate from Ghost to Hugo"
date: 2024-02-17
draft: true
slug: "migrate-from-ghost-to-hugo"
-tags: ["ci/cd", "docker", "hugo"]
+tags: ["ci/cd", "docker", "git", "hugo"]
type: "programming"
---

## Current solution
I've had a blog since early 2022. Historically, I chose Ghost to make this site. At the time, several factors led me to choose this CMS rather than another:
- Ease of deployment and use
- Online administration page and article editor
- Simple, modern theme
- Regular updates

But after using it for quite some time, a number of problems have arisen that can't really be corrected:
- Limited customization
- Complex theme modification if you want to be able to update it afterwards
- Significant resource requirements for Ghost+MySQL (see Conclusion)
- No options for advanced content organization
- Too many unused options to justify this choice (subscriptions, user accounts, etc.)

All these problems, combined with the fact that my needs had evolved, led me to change the technical solution for my blog.

## Choosing a new solution
Today, there are many options for blogging: third-party hosted platforms like Medium, CMS like WordPress and Ghost, but also static site generators. For this new version of my blog, I've opted for a static site generator.

Here again, there are several solutions, but I've settled on [Hugo](https://gohugo.io/).

Hugo is a Go-based open-source framework created in 2013. It's known for being very fast, highly customizable, and for its very active community.

After choosing Hugo I had to choose a theme; I had several requirements in terms of features. I ended up choosing [Blowfish](https://blowfish.page/).

It's a highly flexible and customizable theme, regularly updated, with a minimalist, modern design.

## Migration
### Settings

### Contents and optimization

To convert every PNG image of the site to WebP:
```bash
find ./content/ -type f -name '*.png' -exec sh -c 'cwebp -q 90 "$1" -o "${1%.png}.webp"' _ {} \;
```
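The post stops short of updating the Markdown that references those images; a hypothetical companion step (the demo path below is illustrative, not from the repo) rewrites `.png` image links to `.webp`:

```shell
# Demo on a throwaway file; in the real repo you would point find at ./content/.
mkdir -p /tmp/demo_content
printf '%s\n' '![diagram](img/diagram.png)' > /tmp/demo_content/post.md
# Rewrite Markdown image references from .png to .webp in place.
find /tmp/demo_content -type f -name '*.md' -exec sed -i 's/\.png)/.webp)/g' {} +
```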
https://pawelgrzybek.com/webp-and-avif-images-on-a-hugo-website/

## Storage and automatic deployment
### Git-LFS

To keep binary assets (images, fonts) out of regular Git history, the repository uses Git LFS:
```bash
apt install git-lfs
git lfs install
git lfs migrate \
  import \
  --include="*.jpg,*.svg,*.ttf,*.woff*,*.min.*,*.webp,*.ico,*.png,*.jpeg" \
  --include-ref=refs/heads/main
```
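`git lfs migrate import` also records the matching track rules in `.gitattributes`; for the patterns above the file ends up with lines of this shape:

```
*.png filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text
*.jpg filter=lfs diff=lfs merge=lfs -text
```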

### CI/CD

hugo.Dockerfile:
```dockerfile
FROM golang:1.22-alpine AS build

ARG CGO=1
ENV CGO_ENABLED=${CGO}
ENV GOOS=linux
ENV GO111MODULE=on

RUN apk update && \
    apk add --no-cache gcc musl-dev g++ git
RUN go install -tags extended github.com/gohugoio/hugo@v0.122.0
```
I then use the following commands to build the Hugo image:
```
docker build -t git.d3vyce.fr/d3vyce/hugo:latest -f hugo.Dockerfile .
docker push git.d3vyce.fr/d3vyce/hugo:latest
```

Dockerfile:
```dockerfile
# Build Stage
FROM git.d3vyce.fr/d3vyce/hugo:latest AS build

WORKDIR /opt/blog
COPY . /opt/blog/

RUN git submodule update --init --recursive
RUN hugo

# Publish Stage
FROM nginx:1.25-alpine

WORKDIR /usr/share/nginx/html
COPY --from=build /opt/blog/public /usr/share/nginx/html/
COPY nginx/ /etc/nginx/

EXPOSE 80/tcp
```

The Gitea Actions workflow that builds and pushes the image on every push to `main`:
```yaml
name: Build Blog Docker Image

on:
  push:
    branches:
      - main

jobs:
  build docker:
    runs-on: linux_amd
    steps:
      - name: checkout code
        uses: actions/checkout@v3
        # with:
        #   lfs: 'true'
      - name: Checkout LFS
        run: |
          function EscapeForwardSlash() { echo "$1" | sed 's/\//\\\//g'; }
          readonly ReplaceStr="EscapeForwardSlash ${{ gitea.repository }}.git/info/lfs/objects/batch"; sed -i "s/\(\[http\)\( \".*\)\"\]/\1\2`$ReplaceStr`\"]/" .git/config
          git config --local lfs.transfer.maxretries 1
          /usr/bin/git lfs fetch origin refs/remotes/origin/${{ gitea.ref_name }}
          /usr/bin/git lfs checkout
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Login to Docker registry
        uses: docker/login-action@v2
        with:
          registry: git.d3vyce.fr
          username: ${{ github.actor }}
          password: ${{ secrets.GIT_TOKEN }}
      - name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          file: ./Dockerfile
          platforms: linux/amd64
          push: true
          tags: git.d3vyce.fr/${{ github.repository }}:latest
```

The LFS checkout workaround comes from https://gitea.com/gitea/act_runner/issues/164

## Conclusion: before/after comparison




@ -44,7 +44,7 @@ Currently my homelab is composed of the following elements:
---

## Topology
-
+

In terms of network architecture it is quite simple, there is only one subnet, the 10.0.0.0/24, that I have subdivided for the different equipment:
- 10.0.0.1 : Unifi Dream Machine
@ -58,7 +58,7 @@ The vast majority of services/VM/storage are on the Mark1 server. This server is

[Unraid](https://unraid.net) is a paid OS that is offered in 3 versions:

-
+

The only difference is the number of storage devices we can install in our server. In my case I am on the "Plus" version. It's a one-time payment that allows you to unlock all the features.

@ -74,17 +74,17 @@ Mark2 is a server under Ubuntu Server, it is notably used for game servers (Mine
As you can see on the diagram there are many services running in my homelab. Most of them are on the "Mark1" server and are Dockers.
| | Name | Description |
| ----------- | ----------- | ----------- |
-| <img src="icon/radarr.png" alt="radarr" width="50"/> | Radarr | Movie collection manager |
-| <img src="icon/sonarr.png" alt="sonarr" width="50"/> | Sonarr | Series collection manager |
-| <img src="icon/bazzar.png" alt="bazzar" width="50"/> | Bazzar | Subtittle finder for movie and series |
-| <img src="icon/jackett.png" alt="jackett" width="50"/> | Jackett | Proxy server for indexer |
-| <img src="icon/adguardhome.png" alt="adguardhome" width="50"/> | AdGuardHome | DNS for blocking ads and tracking |
-| <img src="icon/bitwarden.png" alt="bitwarden" width="50"/> | Bitwarden | Password manager |
-| <img src="icon/deluge.png" alt="deluge" width="50"/> | Deluge | Torrent downloader |
-| <img src="icon/gitea.png" alt="gitea" width="50"/> | Gitea | Local github |
-| <img src="icon/homeassistant.png" alt="homeassistant" width="50"/> | Home Assistant | IOT manager (Zigbee) |
-| <img src="icon/nginxproxymanager.png" alt="nginxproxymanager" width="50"/> | Nginx Proxy Manager | Reverse Proxy |
-| <img src="icon/plex.png" alt="plex" width="50"/> | Plex | Movie and series remote access |
+| <img src="icon/radarr.webp" alt="radarr" width="50"/> | Radarr | Movie collection manager |
+| <img src="icon/sonarr.webp" alt="sonarr" width="50"/> | Sonarr | Series collection manager |
+| <img src="icon/bazzar.webp" alt="bazzar" width="50"/> | Bazzar | Subtitle finder for movie and series |
+| <img src="icon/jackett.webp" alt="jackett" width="50"/> | Jackett | Proxy server for indexer |
+| <img src="icon/adguardhome.webp" alt="adguardhome" width="50"/> | AdGuardHome | DNS for blocking ads and tracking |
+| <img src="icon/bitwarden.webp" alt="bitwarden" width="50"/> | Bitwarden | Password manager |
+| <img src="icon/deluge.webp" alt="deluge" width="50"/> | Deluge | Torrent downloader |
+| <img src="icon/gitea.webp" alt="gitea" width="50"/> | Gitea | Local github |
+| <img src="icon/homeassistant.webp" alt="homeassistant" width="50"/> | Home Assistant | IOT manager (Zigbee) |
+| <img src="icon/nginxproxymanager.webp" alt="nginxproxymanager" width="50"/> | Nginx Proxy Manager | Reverse Proxy |
+| <img src="icon/plex.webp" alt="plex" width="50"/> | Plex | Movie and series remote access |



@ -92,7 +92,7 @@ In addition to these services, I have two database managers: MariaDB and Redis.

In terms of VMs on Mark1, I have 2 Ubuntu VMs for web hosting, a GNS3 VM for network tests, a VM containing Home Assistant, a Debian VM for a Docker project in progress, and a Kali VM to do pentesting and have remote access to cyber tools.

-
+

---
