• 0 Posts
  • 45 Comments
Joined 2 years ago
Cake day: July 2nd, 2023

  • litchralee@sh.itjust.works to Selfhosted@lemmy.world · Password managers... · 10 hours ago

    For a single password, it is indeed illogical to distribute it to others as a way of preventing it from being stolen and misused.

    That said, the concept of distributing authority amongst others is quite sound. Instead of each owner having the whole secret, they only have a portion of it, and a majority of owners need to agree in order to combine their parts and use the secret. Rather than passwords, it’s typically used for cryptographically signing off on something’s authenticity (eg software updates), where it’s known as threshold signatures:

    Imagine for a moment, instead of having 1 secret key, you have 7 secret keys, of which 4 are required to cooperate in the FROST protocol to produce a signature for a given message. You can replace these numbers with some integer t (instead of 4) out of n (instead of 7).

    This signature is valid for a single public key.

    If fewer than t participants are dishonest, the entire protocol is secure.
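
    A minimal sketch of the t-of-n idea, using Shamir secret sharing over a prime field. FROST itself is more involved – the full signing key is never reassembled in one place – but the threshold math underneath is the same flavour. This is just an illustration, not the FROST protocol:

    ```python
    import secrets

    P = 2**127 - 1  # a convenient large prime for this toy demo

    def split(secret: int, t: int, n: int):
        """Split `secret` into n shares; any t of them can reconstruct it."""
        coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
        def f(x):  # evaluate the random degree-(t-1) polynomial at x
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def combine(shares):
        """Lagrange interpolation at x=0 recovers the secret."""
        total = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % P
                    den = (den * (xi - xj)) % P
            total = (total + yi * num * pow(den, -1, P)) % P
        return total

    secret = 123456789
    shares = split(secret, t=4, n=7)
    assert combine(shares[:4]) == secret   # any 4 of the 7 shares suffice
    assert combine(shares[2:6]) == secret
    ```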




  • https://ipv6now.com.au/primers/IPv6Reasons.php

    Basically, Legacy IP (v4) is a dead end. Under the original allocation scheme, it should have run out in the early 1990s. But the Internet explosion meant TCP/IP(v4) was locked in, and so NAT was introduced to stave off address exhaustion. That caused huge problems that persist to this day, like mismanaged firewalls and the need for port-forwarding. It also broke end-to-end connectivity, which requires additional workarounds like STUN/TURN that continue to plague gamers and video-conferencing software.

    And because of that scarcity, it’s become a land grab where rich companies and countries hoard the limited addresses in circulation, creating haves (North America, Europe) and have-nots (Africa, China, India).

    The case for v6 is technical, moral, and even economic: one cannot escape Big Tech or American hegemony while still having to buy IPv4 space on the open market. Czechia and Vietnam are case studies in pushing for all-IPv6, both to build domestic technological expertise and to escape the broad problems with Business As Usual.

    Accordingly, there are now three classes of Internet users: v4-only, dual-v4-and-v6, and v6-only. Surprisingly, v6-only is very common now on mobile networks in countries that never had many v4 addresses. And Apple requires all App Store apps to function correctly in a v6-only environment. At a minimum, everyone should have access to dual-stack IP networks, so they can reach services that might be v4-only or v6-only.
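
    If you’re curious where a given service falls, a quick (and rough) way to check from Python is to ask the resolver which address families it returns. The hostnames below are just examples, and resolving is not the same as being reachable:

    ```python
    # Which address families does a hostname resolve to? DNS alone doesn't
    # prove reachability, but it hints whether a service is v4-only,
    # dual-stack, or v6-only.
    import socket

    def address_families(host):
        families = set()
        for family, *_ in socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP):
            if family == socket.AF_INET:
                families.add("IPv4")
            elif family == socket.AF_INET6:
                families.add("IPv6")
        return families

    for host in ("example.com", "ipv6.google.com"):  # example hostnames only
        print(host, address_families(host))
    ```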

    In due course, the unstoppable march of time will leave v4-only users in the past.


  • You might also try asking on !ipv6@lemmy.world .

    Be advised that even if a VPN offers IPv6, they may not necessarily offer it sensibly. For example, some might only give you a single address (aka a routed /128). That might work for basic web fetching, but it’s wholly inadequate if you want the VPN to also provide addresses to any VMs, or if you want each outbound connection to use a unique IP. And that’s a fair ask, because a normal v6 network can usually do that, even though a typical Legacy IP network can’t.

    Some VPNs will offer you a /64 subnet, but their software might not check if your SLAAC-assigned address is leaking your physical MAC address. Your OS should have privacy-extensions enabled to prevent this, but good VPN software should explicitly check for that. Not all software does.
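
    As a rough sketch of what “checking for a MAC leak” can look like: a SLAAC address formed without privacy extensions derives its low 64 bits from the MAC via modified EUI-64 (flip the U/L bit, insert ff:fe in the middle), which is easy to test for. The address and MAC below are placeholders:

    ```python
    # Does this IPv6 address embed the interface's MAC (modified EUI-64)?
    import ipaddress

    def eui64_interface_id(mac):
        b = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        # flip the U/L bit of the first byte, insert ff:fe between the halves
        return bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:6]

    def leaks_mac(addr, mac):
        iid = ipaddress.IPv6Address(addr).packed[8:]   # low 64 bits
        return iid == eui64_interface_id(mac)

    # hypothetical values, for illustration only
    print(leaks_mac("2001:db8::0211:22ff:fe33:4455", "00:11:22:33:44:55"))  # True
    print(leaks_mac("2001:db8::1234:5678:9abc:def0", "00:11:22:33:44:55"))  # False
    ```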


  • Connection tracking might not be totally necessary for a reverse proxy mode, but it’s worth discussing what happens if connection tracking is disabled or if the known-connections table runs out of room. For a well-behaved protocol like HTTP(S) that has a fixed inbound port (eg 80 or 443) and uses TCP, tracking a connection means being aware of the TCP connection state, which the destination OS already has to do. And since a reverse proxy terminates the TCP connection, the effort for connection tracking is minimal.

    For a poorly-behaved protocol like FTP – which receives initial packets on a fixed inbound port but then spawns a separate port for outbound packets – the effort of connection tracking means setting up the firewall to allow ongoing (ie established) traffic to pass in.

    But these are the happy cases. In the event of a network issue that affects an HTTP payload sent from your reverse proxy toward the requesting client, a mid-way router will send back to your machine an ICMP packet describing the problem. If your firewall is not configured to let all ICMP packets through, then the only way in would be if conntrack looks up the connection details from its table and allows the ICMP packet in, as “related” traffic. This is not dissimilar to the FTP case above, but rather than a different port number, it’s an entirely different protocol.

    And then there’s UDP tracking, which is relevant to QUIC. For hosting a service, UDP is connectionless, so for any inbound packet received on port XYZ, conntrack will permit an outbound reply from port XYZ. But that’s redundant, since we presumably had to explicitly allow inbound port XYZ to expose the service. In the opposite case, where we want to access UDP resources on the network, an outbound packet to port ABC means conntrack will keep an entry to permit an inbound reply on port ABC. If you are doing lots of DNS lookups (typically over UDP), that alone could swamp the conntrack table: https://kb.isc.org/docs/aa-01183

    It may behoove you to first look at what’s filling conntrack’s table, before looking to disable it outright. It may be possible to specifically skip connection tracking for anything already explicitly permitted through the firewall (eg 80/443). Or if the issue is due to numerous DNS resolution requests from trying to look up spam source IPs, then perhaps the logs should not do a synchronous DNS lookup, or you could also skip connection tracking for DNS.
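
    As a starting point for that investigation, something like the following tallies which protocols and destination ports dominate the table. It assumes root and the nf_conntrack proc interface; some systems only expose this via the conntrack CLI:

    ```python
    # Rough tally of what's occupying the conntrack table, grouped by
    # protocol and destination port.
    from collections import Counter

    counts = Counter()
    with open("/proc/net/nf_conntrack") as f:
        for line in f:
            fields = line.split()
            proto = fields[2]  # e.g. tcp, udp, icmp
            dport = next((x.split("=")[1] for x in fields if x.startswith("dport=")), "-")
            counts[(proto, dport)] += 1

    for (proto, dport), n in counts.most_common(20):
        print(f"{proto:5} dport={dport:6} {n}")
    ```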


  • https://github.com/Overv/vramfs

    Oh, it’s a user space (FUSE) driver. I was rather hoping it was an out-of-tree Linux kernel driver, since using FUSE will: 1) always pass back to userspace, which costs performance, and 2) destroy any possibility of DMA-enabled memory operations (DPDK is a possible exception). I suppose if the only objective was to store files in VRAM, this does technically meet that, but it’s leaving quite a lot on the table, IMO.
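
    If you want to put a number on that overhead, a crude sequential-write comparison between the vramfs mount and tmpfs is enough to see the gap. The mount points below are placeholders for wherever the filesystems actually live on your system:

    ```python
    # Crude sequential-write throughput comparison between two mount points,
    # e.g. a vramfs FUSE mount versus tmpfs. Paths are placeholders.
    import os, time

    def write_throughput(path, total_mb=256, block_kb=1024):
        block = os.urandom(block_kb * 1024)
        target = os.path.join(path, "bench.tmp")
        start = time.perf_counter()
        with open(target, "wb") as f:
            for _ in range(total_mb * 1024 // block_kb):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())
        elapsed = time.perf_counter() - start
        os.remove(target)
        return total_mb / elapsed  # MB/s

    for mount in ("/mnt/vram", "/dev/shm"):   # hypothetical mount points
        print(mount, f"{write_throughput(mount):.0f} MB/s")
    ```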

    If this were a kernel module, the filesystem performance would presumably improve, limited by how the VRAM is exposed by OpenCL (ie very fast if it’s all just mapped into PCIe). And if it were basically offering VRAM as PCIe memory, then the VRAM could potentially be used for certain niche RAM use cases, like hugepages: some applications need large quantities of memory, plus a guarantee that it won’t be evicted from RAM, and whose physical addresses can be resolved from userspace (eg DPDK, high-performance compute). If such a driver could offer special hugepages which are backed by VRAM, then those applications could benefit.

    And at that point, on systems where the PCIe address space is unified with the system address space (eg x86), then it’s entirely plausible to use VRAM as if it were hot-insertable memory, because both RAM and VRAM would occupy known regions within the system memory address space, and the existing MMU would control which processes can access what parts of PCIe-mapped-VRAM.

    Is it worth re-engineering the Linux kernel memory subsystem to support RAM over PCIe? Uh, who knows. Though I’ve always liked the thought of DDR on PCIe cards. “All technologies are doomed to reinvent PCIe,” as someone from Level1Techs once said, I think.



  • For my own networks, I’ve been using IPv6 subnets for years now, and have NAT64 translation for when they need to access Legacy IP (aka IPv4) resources on the public Internet.

    Between your two options, I’m more inclined to recommend the second solution, because although it requires renumbering existing containers to the new subnet, you would still have one subnet for all your containers, just a bigger one. Whereas the first solution would either: A) preclude containers on the first bridge from directly talking to containers on the second bridge, or B) require some sort of awful NAT44 translation to make the two bridges work together.

    So if IPv6 and its massive, essentially-unlimited ULA subnets are not an option, then I’d still go with the second solution, which is a bigger-but-still-singular subnet.
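
    For reference, carving out ULA space is trivial. The sketch below grabs a random /48 under fd00::/8 and hands out /64s for container bridges; RFC 4193 suggests deriving the 40-bit Global ID from a hash of a timestamp and EUI-64, but plain randomness is a common shortcut and is what this sketch does:

    ```python
    # Generate a random ULA /48 and carve a few /64s out of it.
    import secrets, ipaddress

    global_id = secrets.randbits(40)                       # 40-bit Global ID
    ula = ipaddress.IPv6Network(((0xfd << 120) | (global_id << 80), 48))
    print("ULA /48 :", ula)

    # Plenty of /64s to hand one to each container bridge/network.
    subnets = ula.subnets(new_prefix=64)
    for _ in range(3):
        print("bridge  :", next(subnets))
    ```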


  • For a link of 5.5 km and with clear LoS, I would reach for 802.11 WiFi, since the range of 802.11ah HaLow wouldn’t necessarily be needed. For reference, many WISPs use Ubiquiti 5 GHz point-to-point APs for their backhaul links for much further distances.

    The question would be what your RF link conditions look like, whether 5 GHz is clear in your environment, and what sort of worst-case bandwidth you can accept. With a clear Fresnel zone, you could probably be pushing something like 50 Mbps symmetrical, if properly aimed and configured.

    Ubiquiti’s website has a neat tool for roughly calculating terrain and RF losses.
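
    For a rough sanity check without the tool, the first Fresnel zone radius and the free-space path loss fall out of two standard formulas; at 5.5 km in the 5 GHz band you get roughly an 8–9 m mid-path clearance requirement and ~122 dB of free-space loss:

    ```python
    # Back-of-the-envelope link math for a 5.5 km point-to-point hop.
    import math

    def fresnel_radius_m(d1_m, d2_m, freq_hz):
        lam = 3e8 / freq_hz                     # wavelength in metres
        return math.sqrt(lam * d1_m * d2_m / (d1_m + d2_m))

    def fspl_db(dist_m, freq_hz):
        # Free-space path loss: 20log10(d) + 20log10(f) + 20log10(4*pi/c)
        return 20 * math.log10(dist_m) + 20 * math.log10(freq_hz) - 147.55

    d = 5500.0     # metres
    f = 5.8e9      # Hz; use the actual channel frequency
    print(f"Fresnel radius at mid-path: {fresnel_radius_m(d/2, d/2, f):.1f} m")
    print(f"Free-space path loss:       {fspl_db(d, f):.1f} dB")
    ```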





  • Tbf, can’t the other party mess it up with signal too?

    Yes, but this is where threat modeling comes into play. Grossly simplified, developing a threat model means assessing what sort of attackers you can reasonably expect to make an attempt on you. For some people, their greatest concern is their conservative parents finding out that they’re on birth control. For others, they might be a journalist trying to maintain confidentiality of an informant from a rogue sheriff’s department in rural America. Yet others face the risk of a nation-state’s intelligence service trying to find their location while in exile.

    Each of these users faces different potential attackers. Signal is well suited for the first two, and only alright against the third. After all, if the CIA or Mossad is following someone around IRL, there are other ways to crack their communications.

    What Signal specifically offers is confidentiality in transit, meaning that all ISPs, WiFi networks, CDNs, VPNs, script kiddies with Wireshark, and network admins in the path of a Signal convo cannot see the contents of those messages.

    Can the messages be captured at the endpoints? Yes! Someone could be standing right behind you, taking photos of your screen. Can the size or metadata of each message reveal the type of message (eg text, photo, video)? Yes, but that’s akin to feeling the shape of an envelope. Only through additional context can the contents be known (eg a parcel in the shape of a guitar case).

    Signal also benefits from the network effect, because someone trying to get away from an abusive SO has plausible deniability if they download Signal on their phone (“all my friends are on Signal” or “the doctor said it’s more secure than email”). Or a whistleblower can message a journalist who included their Signal username in a printed newspaper. The best place to hide a tree is in a forest. We protect us.

    My main issue for signal is (mostly iPhone users) download it “just for protests” (ffs) and then delete it, but don’t relinquish their acct, so when I text them using signal it dies in limbo as they either deleted the app or never check it and don’t allow notifs

    Alas, this is an issue with all messaging apps, if people delete the app without closing their account. I’m not sure if there’s anything Signal can do about this, but the base guarantees still hold: either the message is securely delivered to their app, or it never gets seen. But the confidentiality should always be maintained.

    I’m glossing over a lot of cryptographic guarantees, but for one-to-one or small-group private messaging, Signal is the best mainstream app at the moment. For secure group messaging, like organizing hundreds of people for a protest, that is still up for grabs, because even if an app was 100% secure, any one of those persons can leak the message to an attacker. More participants means more potential for leaks.



  • Let me make sure I understand everything correctly. You have an OpenWRT router which terminates a Wireguard tunnel, which your phone will connect to from somewhere on the Internet. When the Wireguard tunnel lands within the router in the new subnet 192.168.2.0/24, you have iptable rules that will:

    • Reject all packets on the INPUT chain (from subnet to OpenWRT)
    • Reject all packets on the OUTPUT chain (from OpenWRT to subnet)
    • Route packets from phone to service on TCP port 8080, on the FORWARD chain
    • Allow established connections, on the FORWARD chain
    • Reject all other packets on the FORWARD chain

    So far, this seems alright. But where does the service run? Is it on your LAN subnet or the isolated 192.168.2.0/24 subnet? The diagram you included suggests that the service runs on an existing machine on your LAN, so that would imply that the router must also do address translation from the isolated subnet to your LAN subnet.

    That’s doable, but ideally the service would be homed onto the isolated subnet. But perhaps I misunderstood part of the configuration.


  • im not much of a writer, im sure its more clear from AI than if i did it myself

    Please understand this in the kindest possible way: if you were not willing to write documentation yourself, why should I want to review it? I too could use an AI/LLM to distill documentation rather than posting this comment, but I choose not to, because I believe that open discussion is a central tenet of open-source software. Even if you are not great at writing in technical English, any attempt at all will be more germane to your intentions and objectives than what an LLM generates. You would have had to first describe your intentions and objectives to the LLM anyway. Might as well get real-life practice at writing.

    It’s not that AI and LLMs can’t find their way into the software development process, but the question is to what end: using an AI system to give the appearance of a fully-fleshed-out project when it isn’t, that is deceitful. Using an AI system to learn, develop, and revise the codebase, to the point that you yourself can adequately teach someone else how it works, that is divine.

    With that out of the way, we can talk about the high-level merits of your approach.

    how the authentication works: https://positive-intentions.com/docs/research/authentication

    What is the lifetime of each user’s public/private keypair? What is the lifetime of the symmetric key shared between two communicating users? The former is important because people can and do lose their private key, or have a need to intentionally destroy the key. In such an instance, does the browser app explicitly invalidate a key and inform the counterparty? Or do keys silently disappear and take the message history with them?

    The latter is important because the longer a symmetric key is used, the more ciphertext a malicious actor can store now and decrypt later, possibly once quantum computers can break today’s encryption. More pressing, though, is that a leak of the symmetric key means all prior and future messages are revealed, until the symmetric key is rotated.
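
    To make the rotation point concrete – and this is not your project’s scheme, just a sketch using the Python cryptography package – each session can derive its symmetric key from a fresh X25519 exchange via HKDF, so a single leaked key only covers one session:

    ```python
    # Fresh ephemeral exchange + HKDF per session: a leaked session key
    # reveals nothing about earlier or later sessions.
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes

    def derive_key(my_priv, their_pub, context):
        shared = my_priv.exchange(their_pub)
        return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=context).derive(shared)

    # Session 1: both sides use fresh ephemeral keys and derive the same key.
    a1, b1 = X25519PrivateKey.generate(), X25519PrivateKey.generate()
    k_alice = derive_key(a1, b1.public_key(), b"session-1")
    k_bob   = derive_key(b1, a1.public_key(), b"session-1")
    assert k_alice == k_bob

    # Session 2: new ephemerals -> a completely unrelated key.
    a2, b2 = X25519PrivateKey.generate(), X25519PrivateKey.generate()
    assert derive_key(a2, b2.public_key(), b"session-2") != k_alice
    ```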

    how security works: https://positive-intentions.com/blog/security-privacy-authentication

    I take substantial notice whenever a promise of “true privacy” is made, because it either delivers a very strange definition of privacy, or relies upon the reader to supply their own definition of what privacy means to them. When privacy is on offer, I’m always inclined to ask: privacy from whom? From network taps? From other apps running in the same browser?

    This document pays only lip service to some sort of privacy notion, but not in any concrete terms. Instead, it spends a whole section attempting to solve secure key exchange, which simply boils down to “the user validates the hash they received through a secure medium”. If a secure medium existed, then secure key exchange would already be solved. If there isn’t one, using an “a priori” hash of the expected key is still vulnerable to hash attacks.

    this is my sideproject and im trying to get it off the ground

    I applaud you for undertaking an interesting project, but you also have to be aware that many others have tried their hand at secure messaging, with more failures than successes. The blog posts of Soatok show us the failures within just the basic cryptography, and that doesn’t even get to some of the privacy issues that exist separately. For example, until Signal added support for usernames, it was mandatory to reveal one’s phone number to bootstrap the user’s identity. That has since been fixed, but the developers go into detail about why it wasn’t easy to arrive at the present solution.

    am i a cryptographer yet?

    I recall a recent post I saw on Mastodon, where someone implementing a cryptographic library made sure to clarify that they were a “cryptography engineer” and not a cryptographer, because they themselves have to consult with a cryptographer regarding how the implementation should work. That is to say, they recognized that although they are writing the code which implements a cryptographic algorithm, the guarantees come from the algorithm itself – guarantees which are understood by and discussed amongst cryptographers. Sometimes nicely, and other times necessarily very bluntly. Those examples come from this blog post.

    I myself am definitely not a cryptographer. But I can reference the distilled works of cryptographers, such as this 1999 post which still finds relevance today:

    The point here is that, like medicine, cryptography is a science. It has a body of knowledge, and researchers are constantly improving that body of knowledge: designing new security methods, breaking existing security methods, building theoretical foundations, etc. Someone who obviously does not speak the language of cryptography is not conversant with the literature, and is much less likely to have invented something good. It’s as if your doctor started talking about “energy waves and healing vibrations.” You’d worry.

    I wish you the very best with this endeavor, but also urge caution, as the space is vast and the pitfalls are manifold.


  • Aiming to create the worlds most secure messaging app

    For anyone else that was looking for it, this is the link to the threat model: https://positive-intentions.com/docs/research/threat-model/

    That said, it seems quite thin on hard details, such as how identities (ie usernames) are managed – eg are they unique? How can users cross-check an online identity against a real person? Fingerprints? QR codes? SHA256 hashes? – and whether identities are considered publicly exchangeable. Plus how users are bootstrapped so they can find each other.
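
    For what it’s worth, one conventional answer to the cross-checking question is to hash both parties’ public keys into a short fingerprint that each side compares out-of-band (read aloud, or rendered as a QR code). The grouping and length below are illustrative only, not any particular app’s format:

    ```python
    # Order-independent fingerprint over both public keys; both users should
    # see the identical string, and a mismatch suggests a MITM.
    import hashlib

    def fingerprint(pub_a, pub_b, groups=8):
        digest = hashlib.sha256(b"".join(sorted((pub_a, pub_b)))).digest()
        words = [f"{int.from_bytes(digest[i:i+2], 'big') % 100000:05d}"
                 for i in range(0, groups * 2, 2)]
        return " ".join(words)

    alice_pub = bytes.fromhex("aa" * 32)   # placeholder key material
    bob_pub   = bytes.fromhex("bb" * 32)
    print(fingerprint(alice_pub, bob_pub))
    ```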

    While a threat model is the bare minimum needed to even begin assessing anything that utters the word “security”, I do have to ask:

    • Was that document crafted for this project specifically?
    • Was it prepared by a cryptographer?
    • And was it generated using an AI/LLM?

  • litchralee@sh.itjust.works to Selfhosted@lemmy.world · Self hosting Signal server · 4 months ago

    This doesn’t answer OP’s question, but is more of a PSA for anyone that seeks to self-host the backend of an E2EE messaging app: only proceed if you’re willing and able to uphold your end of the bargain to your users. In the case of Signal, the server cannot decrypt messages when they’re relayed. But this doesn’t mean we can totally ignore where the server is physically located, nor how users connect to it.

    As Soatok rightly wrote, the legal jurisdiction of the Signal servers is almost entirely irrelevant when the security model is premised on cryptographic keys that only the end devices have. But also:

    They [attackers] can surely learn metadata (message length, if padding isn’t used; time of transmission; sender/recipients). Metadata resistance isn’t a goal of any of the mainstream private messaging solutions, and generally builds atop the Tor network. This is why a threat model is important to the previous section.
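
    A toy illustration of the message-length point (illustrative only; this is not how Signal actually pads): rounding every plaintext up to a size bucket before encryption makes many different messages produce ciphertexts of identical length, which blunts that piece of metadata:

    ```python
    # Bucket padding: round each plaintext up to the next power-of-two bucket
    # before encrypting, so a short text and a longer one look the same size.
    def pad_to_bucket(msg, min_bucket=256):
        bucket = min_bucket
        while bucket < len(msg) + 2:      # 2 bytes reserved for the true length
            bucket *= 2
        return len(msg).to_bytes(2, "big") + msg + b"\x00" * (bucket - len(msg) - 2)

    def unpad(padded):
        n = int.from_bytes(padded[:2], "big")
        return padded[2:2 + n]

    for text in (b"ok", b"meet at the usual place at 7", b"x" * 200):
        padded = pad_to_bucket(text)
        assert unpad(padded) == text
        print(len(text), "->", len(padded))   # all three land in the 256-byte bucket
    ```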

    So if you’re going to be self-hosting from a country where superinjunctions exist or the right against unreasonable searches is being eroded, consider that well before an agent with a wiretap warrant demands that you attach a logger for “suspicious” IP addresses.

    If you do host your Signal server and it’s only accessible through Tor, this is certainly an improvement. But still, you must adequately inform your users about what they’re getting into, because even Tor is not fully resistant to deanonymization, and then by the very nature of using a non-standard Signal server, your users would be under immediate suspicion and subject to IRL side-channel attacks.

    I don’t disagree with the idea of wanting to self-host something which is presently centralized. But also recognize that the network effect with Signal is the same as with Tor: more people using it for mundane, everyday purposes provides “herd immunity” to the most vulnerable users. Best place to hide a tree is in a forest, after all.

    If you do proceed, don’t oversell what you cannot provide, and make sure your users are fully abreast of this arrangement and they fully consent. This is not targeted at OP, but anyone that hasn’t considered the things above needs to pause before proceeding.