• 3 Posts
  • 40 Comments
Joined 2 years ago
Cake day: November 15th, 2023

  • But the estimate assumes each NC (Nextcloud) instance gets half a vCPU and 1GB of memory. That is a super conservative estimate, and it doesn’t include anything beyond a tiny Fargate deployment and the Aurora instances.

    Edit: Fargate ($40/month) plus the tiniest Aurora instances at 20% utilization with a mere 50GB of storage ($120/month). That still leaves out S3, which will easily cost $50 in storage and transfer (for just a few TB), plus ALBs and network traffic, especially outbound (easily another $50-100 depending on volume).

    This basic solution’s real cost is realistically in the $150-300/month range, and the line items above already sum toward the top of it. I don’t know NC well enough to estimate DB volumes and overall usage, but I assume a lot of data will move in and out (backups, media, etc.).
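
    Summing those line items (a sketch; every figure is the comment’s own estimate, not a quoted AWS price):

    ```typescript
    // Monthly line items from the estimate above, as [low, high] dollar ranges.
    const items: Record<string, [number, number]> = {
      fargate: [40, 40],       // tiny Fargate deployment
      aurora: [120, 120],      // tiniest instances, 20% utilization, 50GB
      s3: [50, 50],            // storage + transfer for a few TB
      albAndEgress: [50, 100], // ALBs + outbound traffic, volume-dependent
    };
    const low = Object.values(items).reduce((sum, [lo]) => sum + lo, 0);
    const high = Object.values(items).reduce((sum, [, hi]) => sum + hi, 0);
    console.log(`~$${low}-$${high}/month`); // ~$260-$310/month
    ```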

    For a heavily used NC instance (assuming a company offering it as a service), the cost is going to become massive pretty fast.

    Also, as a side note: if a company is offering NC as a service but doesn’t manage a single piece of the NC deployment… what is the company’s product? And most importantly, how are they going to make money when AWS will eat a chunk of their revenue that scales linearly with usage, forever?


  • Well yeah, it wouldn’t break the bank, but a conservative cost estimate (ignoring network costs, for example, which are quite relevant for a data-intensive app) puts this setup at about $40/month. That is about 5 times more expensive than a VPS with 4x the resources, i.e., roughly 20x worse price-for-performance.

    OP said this is some sort of “enterprise self-hosting” solution, which I guess kind of makes sense then. For a company providing Nextcloud as a service I would never vendor-lock myself and let AWS take a huge chunk of my revenue forever, but I can imagine folks have different opinions.


  • In that case, Pulumi’s permissions are too broad IMHO for what it has to do; an enterprise should adhere to least privilege. Likewise, as I wrote in another comment, the egress security groups are unclear to me (why is any outbound traffic needed at all?), and the image consumed should be pinned to a digest. Better yet, it should come from a private enterprise registry, ideally with an attestation that can be verified at runtime.
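
    A minimal Pulumi (TypeScript) illustration of the egress and pinning points; the resource names, CIDR, digest, and prefix-list ID are placeholders, not taken from OP’s stack:

    ```typescript
    import * as aws from "@pulumi/aws";

    // Placeholder names/IDs throughout; this sketches the *shape* of a
    // least-privilege setup, not OP's actual deployment.
    const vpc = new aws.ec2.Vpc("nextcloud-vpc", { cidrBlock: "10.0.0.0/16" });

    // Pin the container image by digest instead of a mutable tag like :latest
    // (the digest below is a dummy; resolve the real one from your registry).
    const image =
      "nextcloud@sha256:0000000000000000000000000000000000000000000000000000000000000000";

    // Egress limited to what the task actually needs: HTTPS to S3 (via the
    // region's managed prefix list) and Postgres to Aurora inside the VPC,
    // instead of the usual "all traffic, anywhere".
    const serviceSg = new aws.ec2.SecurityGroup("nextcloud-service", {
      vpcId: vpc.id,
      egress: [
        {
          protocol: "tcp",
          fromPort: 443,
          toPort: 443,
          prefixListIds: ["pl-xxxxxxxx"], // S3 managed prefix list (placeholder)
        },
        {
          protocol: "tcp",
          fromPort: 5432,
          toPort: 5432,
          cidrBlocks: [vpc.cidrBlock], // Aurora subnets live inside the VPC
        },
      ],
    });
    // `image` and `serviceSg` would then be referenced by the ECS task
    // definition and service (omitted here).
    ```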

    I am not sure ECS Fargate makes sense vs an EC2 instance for running this workload. This setup alone will cost about $30/month assuming half a vCPU per replica on Fargate, plus about $12 for the memory (1GB per task). Two t2.micro instances could run for ~$20 without even considering reservation discounts, etc. Obviously the gap only widens at scale, which I suppose should matter a lot to an enterprise.
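
    A back-of-envelope version of that comparison (the hourly rates are my assumptions based on us-east-1 on-demand pricing; they vary by region, which is also why the memory figure comes out lower than the $12 above):

    ```typescript
    // Assumed rates (check your region's current pricing):
    // Fargate: $0.04048 per vCPU-hour, $0.004445 per GB-hour; t2.micro: $0.0116/hour.
    const HOURS = 730; // ~hours per month
    const fargateCpu = 2 * 0.5 * 0.04048 * HOURS; // 2 tasks x 0.5 vCPU ≈ $29.55
    const fargateMem = 2 * 1 * 0.004445 * HOURS;  // 2 tasks x 1GB      ≈ $6.49
    const ec2 = 2 * 0.0116 * HOURS;               // 2x t2.micro        ≈ $16.94
    console.log({
      fargate: +(fargateCpu + fargateMem).toFixed(2), // ≈ 36.04
      ec2: +ec2.toFixed(2),                           // ≈ 16.94
    }); // roughly a 2x gap before any EC2 reservation discounts
    ```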



  • Oh yeah, I am aware. Mostly I would question the idea of having multi-AZ redundancy and a managed DB service (which is indeed expensive) when a $5 VPS could host the same thing (maybe still using S3 for storage), if you accept a few hours of downtime in the rare event your VPS explodes and you need to restore it from a backup.

    So from my PoV this is absolutely overkill, but I concede it depends a lot on the requirements. For my personal stuff I can’t imagine ever having requirements so tight that they need such infra; in fact, I think not even most businesses have them (I have written on the topic at https://loudwhisper.me/blog/hating-clouds/).


  • Everyone is free to pick their poison, but I have to ask… why? Who is the target audience here? This is a massively overkill architecture IMHO. Not to mention that you now need at least three managed services (Fargate, S3, and Aurora) for a single self-hosted tool, and that’s being generous (not counting CloudWatch, ALBs, etc.).

    • Why do the security groups need to allow egress anywhere (or at all)?
    • I would pin the image to a digest rather than using latest.
    • What is the average monthly cost of this infra for you?







  • Over the years I’ve heard many people claim that Proton’s servers being in Switzerland makes them more secure than servers in other EU countries.

    Things change. They are doing it because Switzerland is proposing legislation that would definitely make that claim untrue. Europe is no paradise, especially certain countries, but it still makes sense.

    From the Lumo announcement:

    Lumo represents one of many investments Proton will be making before the end of the decade to ensure that Europe stays strong, independent, and technologically sovereign. Because of legal uncertainty around Swiss government proposals to introduce mass surveillance — proposals that have been outlawed in the EU — Proton is moving most of its physical infrastructure out of Switzerland. Lumo will be the first product to move.

    This shift represents an investment of over €100 million into the EU proper. While we do not give up the fight for privacy in Switzerland (and will continue to fight proposals that we believe will be extremely damaging to the Swiss economy), Proton is also embracing Europe and helping to develop a sovereign EuroStack for the future of our home continent. Lumo is European, and proudly so, and here to serve everybody who cares about privacy and security worldwide.


  • They don’t actually explain it in the article. The author doesn’t seem to understand why there is a claim of e2e-encrypted chat history and zero access for chats. The point of zero access is trust: you need to trust the provider to do it, because it is not cryptographically verifiable. Upstream there is no encryption; zero access means providing the service (usually on plaintext), then encrypting the data and discarding the plaintext.

    Of course the model needs access to the context in plaintext, exactly like Proton has access to emails sent to non-PGP addresses. What they can do is encrypt the chat histories, because those don’t need active processing, and encrypt the communication between the model (which needs plaintext access) and the client in transit. The same happens with Scribe.
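
    To make the trust point concrete, here is a toy illustration of “encrypt after processing” (my sketch using Node’s crypto module, not Proton’s actual implementation; the function name is made up):

    ```typescript
    import { createCipheriv, publicEncrypt, randomBytes } from "crypto";

    // Toy "zero-access" storage: the server sees the plaintext while serving
    // the request, then encrypts it to the user's public key and drops it.
    function storeZeroAccess(plaintext: string, userPublicKeyPem: string) {
      const sessionKey = randomBytes(32);
      const iv = randomBytes(12);
      const cipher = createCipheriv("aes-256-gcm", sessionKey, iv);
      const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
      const tag = cipher.getAuthTag();
      // Wrap the session key so only the user's private key can unwrap it;
      // from here on, the server can no longer read what it stored.
      const wrappedKey = publicEncrypt(userPublicKeyPem, sessionKey);
      return { wrappedKey, iv, tag, ciphertext };
      // Nothing proves the server actually discarded the plaintext or the
      // session key; that's the trust part.
    }
    ```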

    I personally can’t stand LLMs, I am waiting eagerly for this bubble to collapse, but this article is essentially a nothing burger.



  • Because it’s unnecessary in almost all cases. So far there is only one community that forbids people from commenting based on who they are; otherwise the rules boil down to standard acceptable behavior according to common sense. It’s also a nuisance for users: I am quite sure nobody wants to click several times and get derailed into checking the rules (on mobile) for every comment they want to write on every post they see in a feed. If this were expected as standard behavior, I would guess even fewer interactions would happen.


  • Based on the comments here and in the previous similar post I have seen, the vast, vast majority of people (presumably men) point out that this is a problem of post visibility in public feeds.

    It’s a tradeoff between keeping the community public for discoverability and accepting that many people will not check the rules and will violate them, some inadvertently.

    The alternative is to make the community private and accept that women will need to discover a women-related community by searching for “women”, which doesn’t seem terribly unlikely.

    From the sentiment I read, most people wouldn’t care at all if the community were private and would have no desire to “invade” it. I definitely count myself in that group.

    Considering that it’s (apparently) in the community’s interest to have only women, I think it’s fair to put the (minimal) effort on future members to look for it (plus advertising it in posts, etc.), instead of expecting the vast majority of users (the fediverse is mostly male) to add friction by checking the rules of every single community for every post they open (now it might be one community, more might come). Yes, community rules are important, but realistically, if you don’t behave like an asshole you don’t need to worry about them 99% of the time.

    However, if this tradeoff is not deemed acceptable, I think there is no point complaining about people “invading” women’s spaces, because it’s guaranteed that many people will comment without reading the rules, as I am sure nearly all users do all the time. Even without counting those who intentionally violate the rule, there will always be an organic share of people who do so inadvertently.

    At this point I think the tradeoff is so clear that discussing the topic in such a confrontational way looks more like rage-bait than a genuine attempt to solve the problem.



  • Hey, I haven’t, but to be honest, the answers I got from most companies showed me that the process was handled by people who barely understood the legal and technical aspects of data collection (e.g., privacy@ was often answered by ordinary support agents). That means I wouldn’t trust their answers anyway, AND I doubt many of these companies even have an effective way to check.

    As for the data being sold, unfortunately I think it’s far more effective to reach out to the few big data brokers directly with deletion requests, or to pay one of the companies that offer that as a service…