If you get your domain from OVH, you get one single mailbox for free (with a lot of aliases if you want, e.g. a different email address for every service/website you use).
What is your ‘deleted files’ policy? How long do you keep them? I had a similar issue, but then found out that the Nextcloud cron process wasn’t running, so files in the ‘deleted files’ folder were never really deleted.
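In case it helps, this is roughly how I checked it (paths assume a typical install under /var/www/nextcloud running as the www-data user; adjust to your setup):

```bash
# Run the background-job worker once by hand (this is what the cron job normally does):
sudo -u www-data php /var/www/nextcloud/cron.php

# Check how long trashed files are kept; 'auto' means Nextcloud decides based on free space:
sudo -u www-data php /var/www/nextcloud/occ config:system:get trashbin_retention_obligation
```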
Well, based on Samsy’s advice: take a backup of your home-server network to a NAS on your home network. (I do hope that your server segment and your home segment are two separate networks, no?) Or better, set up your NAS at a friend’s house (and require MFA or a hardware security key to access it remotely).
What was that saying again?
“the biggest threat to the safety and cybersecurity of the citizens of a country … are managers who think that cybersecurity is just a number on an Excel sheet”
(I don’t know where I read this, but I think it really hits the nail on the head)
I have been thinking the same thing.
I have been looking into a way to copy files from our servers to our S3 backup storage without having the access keys stored on the server (as I think we can assume that this will be one of the first things the ransomware toolkits will be looking for).
Perhaps a script on a remote machine that initiates an SSH session to the server and does an “s3cmd cp” with the keys entered from stdin? So far, I have not found how to do this.
Does anybody know if this is possible?
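To make the idea more concrete, here is a rough, untested sketch of what I have in mind (the hostname, bucket and paths are just placeholders; also note that keys passed as command-line arguments are briefly visible in the process list on the server while the upload runs):

```bash
#!/bin/bash
# Run from a separate "backup controller" machine, so the S3 keys are never
# stored on the server being backed up.

read -rsp "S3 access key: " ACCESS_KEY; echo
read -rsp "S3 secret key: " SECRET_KEY; echo

# Send the keys over the SSH channel via stdin; the remote side reads them
# into shell variables and hands them to s3cmd, so nothing is written to disk there.
printf '%s\n%s\n' "$ACCESS_KEY" "$SECRET_KEY" | ssh backup@myserver '
  read -r AK
  read -r SK
  s3cmd --access_key="$AK" --secret_key="$SK" \
        put --recursive /srv/data/ s3://my-backup-bucket/myserver/
'
```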
Yes. Fair point.
On the other hand, most of the disaster scenarios you mention are solved by geographic redundancy: set up your backup / DRS storage in a datacenter far away from the primary service. A scenario where all services, in all datacenters managed by a cloud provider, are impacted is probably new.
It is something that, considering the current geopolitical situation we are in now (and which I assume will only get worse), we had better keep in the back of our minds.
I will put “multicloud” on my wishlist.
Looking at it from an infosec point of view, cloud providers are an ideal target. All the customers who have just lost all their data and are now complaining to the cloud provider are the ideal pressure mechanism to get the provider to pay out.
In this case, it is not you -as a customer- who got hacked; it was the cloud company itself. The ransomware gang encrypted the disks at server level, which impacted all the customers on every server of the cloud provider.
The issue is not cloud vs self-hosted. The question is “who has technical control over all the servers involved”. If you home-host a server and keep a backup of it on a friend’s network, and your username/password pops up on an infostealer website, you will be equally in trouble!
Well, the issue here is that while your backup may be physically in a different location (you can ask to have your S3 backup storage hosted in a different datacenter than the VMs), if the servers on which the services (VMs or S3) are hosted are managed by the same technical entity, then a ransomware attack on that company can affect both services.
So, get S3 storage for your backups from a completely different company?
I just wonder to what degree this will impact the bandwidth usage of your VM if -say- you do a complete backup of your VM every day to a host that will be considered “off-premises”.
First of all, thanks to all who replied! I didn’t think there would be that many people who self-host an SSO server, so I am happy to see these replies.
As a side note, I have also been looking into making the setup more robust, i.e. adding redundancy. For a “lightly redundant” scenario (not fully automatic, but -say- where I have a 2nd instance ready to run, so I just need to adapt the DNS record if needed), can I conclude from the “making a backup” question that I just need to run a 2nd instance of Postgres and do streaming replication from the main instance to the backup instance?
Or are there other caveats I haven’t thought about?
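For reference, the kind of minimal streaming-replication setup I have in mind looks roughly like this (PostgreSQL 12 or newer; the hostname, replication user and data directory path are placeholders):

```bash
# On the primary: create a replication role and allow the standby to connect.
sudo -u postgres psql -c "CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'changeme';"
# In pg_hba.conf on the primary (then reload):
#   host  replication  replicator  <standby-ip>/32  scram-sha-256

# On the standby: take a base backup that is already set up to stream from the primary.
# -R writes the connection settings and creates standby.signal automatically.
sudo -u postgres pg_basebackup -h primary.example.org -U replicator \
     -D /var/lib/postgresql/16/main -R -P
# Then start PostgreSQL on the standby; it follows the primary read-only
# until you promote it (pg_ctl promote) and repoint the DNS record.
```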
Great thanks! (also thanks to Mike … you have some valid points)
For me, the first goal is simply to understand the setup. I have now been able to create a setup with two frontend JVB instances and one backend. In the end, the architecture of a Jitsi server is quite nicely explained, and -by delving a little bit into the startup scripts of the docker-based Jitsi setup- you do get some idea of how things fit together.
From a practical point of view, I think I’ll go for the basic setup (1 backend, 2 frontends) natively on two servers, and -if the backend server were to go down- just have a dockerised backup setup ready to go if needed.
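For the dockerised fallback, my rough plan is just to follow the official jitsi/docker-jitsi-meet quick start and keep it pre-configured on a spare machine (exact file and directory names may differ between releases):

```bash
# Keep a ready-to-start copy of the official compose setup on a spare machine.
git clone https://github.com/jitsi/docker-jitsi-meet /opt/jitsi-fallback
cd /opt/jitsi-fallback
cp env.example .env        # set PUBLIC_URL, HTTP_PORT, HTTPS_PORT, ... for the fallback host
./gen-passwords.sh         # generate the internal XMPP component passwords
mkdir -p ~/.jitsi-meet-cfg/{web,prosody/config,prosody/prosody-plugins-custom,jicofo,jvb,jigasi,jibri}
# Only when the native backend is down: point the DNS record here and start it.
docker compose up -d
```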
Thanks!
A /48 is quite overkill for a home customer: it leaves 16 bits of subnet ID above the /64 boundary, i.e. 2^16 = 65536 possible /64 subnets. Do you have 65536 LANs at home? Here in Belgium, we get a /56, which still gives 256 subnets.
Yes, that’s a very useful idea. Thanks!