

Care to elaborate? Proxmox’s paid tier is long-term support for their older releases, plus paid support. The main code is entirely free, with no features gated behind paywalls or anything like that.


I don’t see any mention of games so far.
A Minecraft server is always a good time with friends, and there are hundreds of other game servers you can self-host.
I don’t know what the commenter you replied to is talking about, but systemd has its own firewalling and sandboxing capabilities. They probably mean that they don’t use Docker for deploying services at all.
Here is a blog post about systemd’s firewall capabilities: https://www.ctrl.blog/entry/systemd-application-firewall.html
Here is a blog post about systemd’s sandboxing: https://www.redhat.com/en/blog/mastering-systemd
Here is the Arch Wiki’s documentation on drop-in units: https://wiki.archlinux.org/title/Systemd#Drop-in_files
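To give a concrete sense of it, here is a minimal sketch of such a drop-in (the service name myapp is hypothetical), combining the network allow-listing from the ctrl.blog post with a few common sandboxing directives:

```ini
# /etc/systemd/system/myapp.service.d/hardening.conf
[Service]
# Firewalling: drop all IP traffic except loopback
IPAddressDeny=any
IPAddressAllow=localhost

# Sandboxing: read-only system dirs, no home access, private /tmp
ProtectSystem=strict
ProtectHome=true
PrivateTmp=true
NoNewPrivileges=true
# An empty bounding set drops all capabilities
CapabilityBoundingSet=
```

Run systemctl daemon-reload and restart the unit for it to take effect.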
I can understand why someone would like this, but this seems like a lot to learn and configure, whereas podman/docker deny most capabilities and network permissions by default.


Corporations really, really love being admin on everybody else’s devices. See kernel-level anticheat.
I feel like people have gotten zero trust (“I don’t need to trust anybody”) confused with “I don’t trust anybody”.
I was listening to a Packet Pushers podcast, and they were like: “So you meet a vendor, and they go, ‘So what do you think zero trust means? We can work with that.’”


I don’t really understand why this is a concern with Docker. Are there any particular features you want from version 29 that version 26 doesn’t offer?
The entire point of Docker is that it doesn’t really matter which version of Docker you have; the containers can still run.
Debian’s version of Docker receives security updates in a timely manner, which should be enough.


You are adding a new repo, but you should know that the Debian repos already contain Docker (as docker.io) and docker-compose.
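For reference, installing from the main repos is a one-liner (assuming Debian or a derivative; the third-party repo names its packages differently, e.g. docker-ce):

```sh
sudo apt install docker.io docker-compose
```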


https://music.youtube.com/playlist?list=PLXW0UFGge4IU4SZE1rsCkKlVbmYgATN96
My curated playlist of entirely “nerdcore”, which is music inspired by (but not taken directly from) anime and video games.


I use authentik, which enables single sign-on (the same account) across services.
Authentik is a bit complex and irritating at times, so I would recommend voidauth or kanidm as alternatives for most self hosters.


This is a very good writeup.
Do you think supabase or other similar solutions also have these pitfalls?


There are a few apps that I think fit this use case really well.
LanguageTool is a spelling and grammar checker with a server-client model. LibreOffice now has built-in LanguageTool integration, where it can access a server of your choosing. I point it at the server I run locally, since Arch Linux packages LanguageTool.
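If your distro doesn’t package it, self-hosting a LanguageTool server is roughly this (a sketch using the stock server jar from a LanguageTool release; the port is arbitrary):

```sh
# from an unzipped LanguageTool release directory
java -cp languagetool-server.jar org.languagetool.server.HTTPServer --port 8081
```

Then point LibreOffice at http://localhost:8081 in its LanguageTool settings (the exact menu path varies by version).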
Another is Stirling-PDF, a well-liked PDF manipulation program that comes as a server with a web interface.


> I think I have also seen socket access in Nginx Proxy Manager in some example now. I don’t really know the advantages other than that you are able to use the container names for your proxy hosts instead of IP and port
I don’t think you need socket access for that. This is what I did: https://stackoverflow.com/questions/31149501/how-to-reach-docker-containers-by-name-instead-of-ip-address#35691865
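The short version: services in the same compose project land on a shared user-defined network, and Docker’s embedded DNS resolves service names on such networks. A minimal sketch (the myapp service is hypothetical):

```yaml
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    ports:
      - "80:80"
      - "443:443"
      - "81:81"
  myapp:
    image: nginx  # hypothetical upstream service
# inside NPM you can now create a proxy host pointing at http://myapp:80,
# no socket access required
```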


I’ve seen three cases where the Docker socket gets exposed to a container (perhaps there are more, but I haven’t seen any):
- Watchtower, which does auto-updates and/or notifies people.
- Nextcloud AIO, which uses a management container that controls the Docker socket to deploy the rest of the stuff Nextcloud wants.
- Traefik, which reads the Docker socket to automatically reverse proxy services.
Nextcloud does the AIO because Nextcloud is a complex service, and it grows even more complex if you want more features or performance. The AIO handles deploying all the tertiary services for you; something like this is how you would do it yourself: https://github.com/pimylifeup/compose/blob/main/nextcloud/signed/compose.yaml . Also, that example docker compose does not include other services like Collabora Office, the web-based Google Docs/Sheets/Slides alternative.
Compare this to the Kubernetes deployment, which, yes, may look intimidating at first. But many of the complexities the Docker deployment of Nextcloud has are actually automated away. Enabling Collabora Office is just collabora.enabled: true in its configuration. Tertiary services like Redis or the database are included in the Kubernetes package as well. Instead of making you configure the containers yourself, it lets you configure the database parameters via YAML, among other nice things.
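As a sketch of what that configuration looks like (the exact value paths are from memory and worth double-checking against the chart’s values.yaml):

```yaml
# values.yaml for the Nextcloud Helm chart (paths approximate)
collabora:
  enabled: true
postgresql:
  enabled: true
redis:
  enabled: true
```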
For case 3, Kubernetes has a feature called an “Ingress”, which is essentially a standardized configuration for a reverse proxy; you can either run one separately, or one is provided as part of the package. For example, the Nextcloud Kubernetes package I linked above has a way to handle ingresses in its config.
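For illustration, a bare-bones Ingress looks something like this (hostname, service name, and port are hypothetical and depend on what the chart creates):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nextcloud
spec:
  rules:
    - host: cloud.example.com  # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nextcloud  # the Service your chart creates
                port:
                  number: 80     # depends on the chart
```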
Kubernetes handles these things pretty well, and it’s part of why I switched. I do auto-upgrade, but I only auto-upgrade my services within their supported stable release, where upgrades are compatible and won’t break anything. This gets me automatic security updates for a period of time before I have to do a manual, potentially breaking upgrade.
TLDR: You are asking questions that Kubernetes has answers to.


Try the YAML language server by Red Hat; it comes with a docker compose validator.
But in general, off the top of my head: dashes = list, no dashes = dictionary.
So this is a list:

```yaml
thing:
  - 1
  - 2
```

And this is a dictionary:

```yaml
dict:
  key1: value1
  key2: value2
```

And then they can be combined into a list of dictionaries:

```yaml
listofdicts:
  - key1dict1: value1dict1
  - key1dict2: value1dict2
    key2dict2: value2dict2
```
And another thing to note is that YAML will read some things as a string. So if you have ports 8080:80, this gets parsed as a string, which is a clue that it’s a string in a list rather than a key: value dictionary entry.
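Applied to the classic compose ports field (quoting is the safe habit here, since some unquoted colon values can be misread by YAML parsers):

```yaml
services:
  web:              # hypothetical service
    ports:
      - "8080:80"   # a list containing one string, not a key: value pair
```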


Signal’s reproducible builds are broken: https://github.com/signalapp/Signal-Android/issues/13565


No, the duckstation dev obtained the consent of contributors and/or rewrote all GPL code.
> I have the approval of prior contributors, and if I did somehow miss you, then please advise me so I can rewrite that code. I didn’t spend several weekends rewriting various parts for no reason. I do not have, nor want a CLA, because I do not agree with taking away contributor’s copyright.


Many Helm charts, like authentik’s or Forgejo’s, integrate Bitnami Helm charts for their databases, so that’s why this is concerning to me.
But I was planning to switch to operators like CloudNativePG for my databases anyway and disable the built-in Bitnami images. When using the built-in Bitnami images, automatic migration between major releases is not supported; you have to do it manually yourself, and that disappointed me.
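For anyone curious, a minimal CloudNativePG cluster is just a small custom resource, and the operator manages the Postgres pods and storage for you (name and size here are hypothetical):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: forgejo-db   # hypothetical name
spec:
  instances: 2       # one primary, one replica
  storage:
    size: 10Gi
```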


I’m on my phone rn and can’t write a longer post. This comment is to remind me to write an essay later. I’ve been using authentik heavily for my cybersecurity club and have a LOT of thoughts about it.
The tldr about authentik’s risk of enshittification is that authentik follows a pattern I call “supportware”. It’s when extremely (intentionally/accidentally) complex software (intentionally/accidentally) lacks edge cases in its docs, because you are supposed to pay for support.
I think this is a sustainable business model, and I think Keycloak (and other Red Hat software) has some similar patterns.
The tldr about authentik itself is that it has a lot of features, but not all of them are relevant to your use case or worth the complexity. I picked up authentik for invites (which afaik are rare; also, the official docs about setting up invites were wrong, see supportware), but invites may not be something you care about.
Anyway. Longer essay/rant later. Despite my problems, I still think authentik is the best for my use case (cybersecurity club), and other options I’ve looked at, like zitadel (seems to be more developer focused) or LDAP + an SSO service (no invites afaik), fit it less well.
Sidenote: Microsoft Entra offers features similar to what I want from authentik, but I wanted to self-host everything.


So Signal does not have reproducible builds, which is very concerning security-wise. I talk about it in this comment: https://programming.dev/post/33557941/18030327 . The TLDR is that no reproducible builds = no way to verify that you are getting an unmodified version of the client.
Centralized servers compound these security issues. If the client is vulnerable to some form of replacement attack, then an attacker could use a much more subtle, difficult-to-detect backdoor, like a weakened crypto implementation that leaks metadata/user data.
With decentralized/federated services, if a client is using servers other than the “main” one, you either have to compromise both the client and the server, or compromise the client in a very obvious way that makes it send extra data to servers it shouldn’t be sending data to.
A big part of the problem is what GitHub calls “bugdoors”: “accidental” bugs that act as backdoors. With a centralized service, it becomes much easier to introduce bugdoors, because all the data routes through one service, which could then silently take advantage of the bug on its own servers.
This is my concern with Signal being centralized. But mostly I’d say don’t worry about it, threat model and all that.
I’m just gonna @ everybody who was in the conversation. I posted this top level for visibility.
@Ulrich@feddit.org @rottingleaf@lemmy.world @jet@hackertalks.com @eleitl@lemmy.world @Damage@feddit.it
EDIT: elsewhere in the thread there is discussion of what was probably a nation-state wiretapping attempt on an XMPP service: https://www.devever.net/~hl/xmpp-incident
For a similar threat model, Signal is simply not adequate, for the reasons I mentioned above, and that’s probably what poqVoq was referring to when he mentioned how it was discussed here.
The only timestamps shared are when they signed up and when they last connected. This is well established by court documents that Signal themselves share publicly.
This, of course, assumes I trust the courts. But if I am seeking maximum privacy/security, I should not have to do that.
Proxmox is based on Debian and uses Debian under the hood…