

Served in the Krogan uprisings. Now I run a podcast




The IP shouldn't change unless the server is down for a period of time and the IP is dynamic.
If it is returning OK then it sounds like DuckDNS is working as intended


I have been using DuckDNS for a few years without issues. It should be simple enough, just set up a cron job with your details as listed on their site where you configure it (something like the snippet below). This keeps your DNS entry up to date.
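Roughly the setup their install page walks you through, with placeholders for the subdomain and token (the paths are just where I happen to keep it):

# /opt/duckdns/duck.sh
echo url="https://www.duckdns.org/update?domains=YOURSUBDOMAIN&token=YOURTOKEN&ip=" | curl -k -o /opt/duckdns/duck.log -K -

# crontab entry, refreshes the record every 5 minutes
*/5 * * * * /opt/duckdns/duck.sh >/dev/null 2>&1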


If you want a quick and easy way to share the odd file you could set up a Syncthing shared folder and COPY things into it that you want to share. When the other side MOVES them out of the shared folder they will be removed from the shared folder on your side.
The advantage of this is security. No access is given to your system. If your friend's box is compromised you don't have an NFS mount or SSH key on their machine. The worst that can be done to you is copies in the shared folder are removed or malicious files are placed in the shared folder, but they won't be able to execute.
You also don't need to open any ports for Syncthing, it will use relays if it can't make a direct connection. A rough compose sketch is below.
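A minimal sketch of how I'd run it in compose (the paths, folder name and localhost-only UI binding are just placeholders; no sync ports are published, so it falls back to relays if it can't make a direct connection):

syncthing:
  image: syncthing/syncthing
  container_name: syncthing
  restart: unless-stopped
  volumes:
    - ./syncthing/config:/var/syncthing/config
    - ./shared-drop:/var/syncthing/shared-drop
  ports:
    - 127.0.0.1:8384:8384  # web UI reachable only from the host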


Enshittification intensifies


Lowest barrier to entry


Not for the right reasons but still a win


Obviously


Clarified my point in the reply above.


It was just an example to illustrate the point. I use specific converters for actual format conversions. Actual uses have been mapping it to a custom data model.
You are right though, right tool for the job and all that.


I use it now and again but not integrated into an IDE and not to write large bits of code.
My uses are like so:
Rewrite this rant to shut the PO/PM up. Explain why this is a waste of time.
Convert this Excel row into a custom model.
Given these tables, give me the SQL to do xyz.
Sometimes for troubleshooting an environment issue.
Do I need it? No. But if it saves me some time on bullshit tasks then that's more time for me.


Well, I have the reverse proxy as I only want one port exposed. I have separate networks per service too to isolate things; only the things that need to talk to each other can (a minimal sketch of what I mean is below).
My stuff is only accessible on the LAN and via the VPN, and even then only certain IPs have access to certain things.
In your case it might be different, but generally a reverse proxy is better as you can have a single point of access to secure and you are not exposing all of your ports to the host or the internet.
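Something like this, with placeholder names; the proxy joins every service network, but service1 and service2 cannot reach each other:

services:
  proxy:
    image: nginx:alpine
    ports:
      - 443:443
    networks:
      - net_service1
      - net_service2
  service1:
    image: example/service1  # placeholder
    networks:
      - net_service1
  service2:
    image: example/service2  # placeholder
    networks:
      - net_service2

networks:
  net_service1:
  net_service2: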


You can use a DDNS service such as DuckDNS, or host on GitHub Pages with Jekyll or something


Debian on the host and everything else in containers
I have the arr stack connected to gluetun doing its thing, and then WireGuard on the host. I only expose my reverse proxy to the host and can connect to the services through that.
Note the networks below: vpn-net allows it to talk to the gluetun network, which has the other stuff. The gluetun and arr stuff are in a separate compose file that defines the network, then the non-VPN stuff connects to that network when it comes up.
nginx:
  image: nginx:1.25.4-alpine-slim
  container_name: nginx
  restart: always
  volumes:
    - /etc/letsencrypt/:/etc/letsencrypt/
    - ./nginx/nginx.conf:/etc/nginx/nginx.conf
    - ./nginx/conf/:/etc/nginx/conf.d/:ro
    - ./nginx/htpasswd:/etc/apache2/.htpasswd:ro
    - /var/log/nginx:/var/log/nginx/
    - ./www/html/:/var/www/html/:ro
    - ./content/Movies:/var/www/media/Movies:ro
    - ./content/Shows:/var/www/media/Shows:ro
  ports:
    - 443:443
  security_opt:
    - no-new-privileges
  networks:
    - reverse-proxy_service1
    - reverse-proxy_serviceN
    - vpn-stack_vpn-net  # defined by the gluetun/arr compose file
  depends_on:
    - service1
    - serviceN
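The top-level networks section isn't in the snippet above; in my non-VPN compose file it is roughly this (the exact keys are just what I use, the important bit is that the VPN network is marked external because the gluetun/arr compose file creates it):

networks:
  reverse-proxy_service1:
  reverse-proxy_serviceN:
  vpn-stack_vpn-net:
    external: true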


That's exactly it, my friend.


And when a competitor arises they just buy them and repeat the process


I do monthly backups with cron, tar and Syncthing for my containers.
I do quarterly backups of my server (14TB) to external USB HDDs. This is done via a script that mounts the drives, runs rsync to copy, then unmounts the drives again and emails me when it is done (roughly the sketch below). I don't bother encrypting them as it is mainly just media.
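A stripped-down sketch of that script; the mount point, drive label, paths and the mail command are placeholders for whatever you have set up:

#!/bin/bash
# Quarterly media backup: mount the USB drive, mirror with rsync, unmount, then email the log.
MOUNTPOINT=/mnt/backup
LOG=/var/log/quarterly-backup-$(date +%F).log

mount /dev/disk/by-label/BACKUP01 "$MOUNTPOINT" || exit 1
rsync -aH --delete /srv/media/ "$MOUNTPOINT/media/" >"$LOG" 2>&1
STATUS=$?
umount "$MOUNTPOINT"

# assumes a working mail command (mailutils, msmtp, etc.) on the host
mail -s "Quarterly backup finished $(date +%F), rsync exit $STATUS" me@example.com < "$LOG"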