• 1 Post
  • 35 Comments
Joined 1 year ago
Cake day: July 24th, 2023

  • I haven’t used Tailscale, so I can’t say how well it works, but as a current ZeroTier user I’ve been considering moving away from it.

    I actually love the idea and it’s super simple to set up, but it has some very annoying pitfalls for me:

    1. It’s a lot of “magic”. When it fails to work, the ZeroTier software gives you very little information about why.
    2. The NAT tunneling can be iffy. I’ve had it fail on some public Wi-Fi networks, and occasionally on mobile internet (same phone and network that otherwise work). Restarting the app, reconnecting and so on can often help, but it’s not super reliable IMO.
    3. Just recently I had to uninstall the app, restart my Mac, and reinstall the app to get it to work again - there were no changes that made it stop, it just decided it had had enough from one day to the next, and as in point 1, it doesn’t tell you much beyond whether it’s connected or not.

    Pretty much all of the issues I’ve had were with devices that have to disconnect and reconnect to the network and/or move between different networks (like a laptop or phone). On my router, it’s been super stable. Point is, your mileage may vary - it’s worth trying, but there are definitely issues.


  • Would you accept a certificate issued by AWS (Amazon)? Or GCP (Google)? Or Azure (Microsoft)? Do you visit websites behind Cloudflare with CF-issued certs? Because all 4 of those certificates are free. There is really no identity validation for signing up with any of them beyond having access to some form of payment (and I don’t think all of them even require that). And you could argue that between those 4 companies it’s about 80-90% of the traffic on the internet these days.

    Paid vs free is not a reliable proxy for trust. If anything, a non-automated process where a random engineer just gets the new cert and then hopefully remembers to delete it has a number of risk factors that don’t exist with LE (or other ACME-supporting providers).
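
    For comparison, this is roughly what the automated path looks like with certbot (a sketch: the domain is a placeholder, and it assumes certbot is installed and can bind port 80):

    certbot certonly --standalone -d example.com
    certbot renew --dry-run

    Issuance and renewal run unattended, so no engineer ever handles the cert files by hand.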


  • Not OP, but some stores have these hyper-sensitive scales you put your bag/scanned items on. They can be super annoying, as tiny differences in the weight will lock up the entire thing and you need someone to unlock it again. E.g. if you didn’t start with all your bags already on it and you try to add a new bag. Or the area is full and you want to remove an already full bag. Or you nudged something with your leg while scanning the next item.




  • I have no experience with this, but I happened to see an interview with Ludwig Minelli, the founder of Dignitas (an organisation for assisted death). The man is 90+ and still fighting for this right. I believe I saw it in video format, but I think this was the interview - it’s worth a read.

    I’d suggest you look up the contact details for the various organisations and reach out with your situation and questions to see what they say. They’re likely to be much better sources of information.


  • I get the convenience part, so the staff doesn’t have to go around doing it by hand, but it just seems infeasible for the other examples mentioned.

    E.g. you go in, pick up an item listed for $10, finish shopping in 20 mins, and the item now costs $15 at the till… you probably leave it (so now the staff has to re-shelve it) and start shopping at a place that is not trying to scam you.

    For the other example, if there are a few packs of something expiring and they reduce the price for all the items on the shelf, everyone will just take the ones with a reasonable shelf life left, leaving the expiring ones.

    Both of these just seem stupid.




  • I wonder if this will also have a reverse tail-end effect.

    Company uses AI (with devs) to produce a large amount of code -> code is in prod for a few years with incremental changes -> dev roles rotate or get further reduced over time -> company now needs to modernize and change a very large legacy codebase that nobody really understands well enough to even feed it into the AI -> now hiring more devs than before to figure out how to manage a legacy codebase 5-10x the size of what the team could realistically handle.

    Writing greenfield code is relatively easy; maintaining it over years and keeping it up to date and well understood while twisting it to fit all the new requirements - now that’s hard.


  • I have never seen contributors get anything for open source contributions.

    In larger, more established projects, they explicitly make you sign an agreement (a CLA) saying your contributions are theirs for free - in the form of a GitHub bot that tells you this when you open a PR. Sometimes you get as much as a mention in a README or changelog, but that’s pretty much it.

    I’m sure there may be some examples of the opposite, I just… wouldn’t hold my breath for it in general.


  • I think I misunderstood your problem - I assumed the issue was the volume mounts, and after testing it I was indeed wrong: the docker CLI now accepts relative paths, so your original command does the same as what I suggested. After re-reading your issue I have a different idea of what’s wrong, but I’d have to see your Dockerfile (or have you confirm) to be sure.

    Do you add 10f.py to the docker image when you build it, and do you specify the command/entrypoint in the Dockerfile? There are two possible issues I can think of with how you do that (although considering the docker compose works, it’s probably the 2nd):

    1. You do add it, and you add it to /data in the image - mounting a volume over it would make the script no longer exist in the container.
    2. You do add it, and it’s not in /data - in this case the issue with running docker run -v ./:/data -w /workdir tenfigers_10f:v1 10f.py is the last bit: you override the command, which makes it try to look for the script at /data/10f.py. If you omit the last part (10f.py), it should run whatever the original command was, and assuming you set the cmd/entrypoint correctly in the Dockerfile, it should see /data as ./ in Python (sketch below).
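
    For the 2nd case, a minimal sketch of a Dockerfile that would behave like that (the base image and paths are my assumptions, not your actual setup):

    FROM python:3.12-slim
    # keep the script outside /data so a volume mount can't shadow it
    COPY 10f.py /opt/10f.py
    # relative paths inside the script now resolve to the mounted directory
    WORKDIR /data
    ENTRYPOINT ["python", "/opt/10f.py"]

    With that, docker run -v ./:/data tenfigers_10f:v1 (no trailing 10f.py) should just work.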

    (Also, when you run it with the CLI you might want to add -it --rm to the docker command, otherwise it won’t really behave like a regular command.)


  • It works in docker compose because compose handles relative paths for the volumes; the docker CLI doesn’t.

    You can achieve this by doing something like

    docker run -v $(pwd):/data ...
    

    pwd is a command that returns the current path as an absolute path; you can run it by itself to see this. The $() syntax executes the inner command first, before the shell runs the rest of the line (same as backticks, just better practice).

    I imagine that wouldn’t work on Windows, but it would on macOS, Linux, or WSL.
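
    If you do need it on Windows, from memory (so treat these as assumptions) the shell equivalents are:

    docker run -v %cd%:/data ...
    docker run -v ${PWD}:/data ...

    The first is for cmd.exe, the second for PowerShell.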

    Generally speaking, if you need the file system access and your CLI requires some setup, I’d recommend either writing it in a statically compiled language (e.g. Go, Rust) or researching how to compile a Python script into an executable.
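
    For the Python route, PyInstaller is one option (a sketch, assuming your script is 10f.py and pip is available):

    pip install pyinstaller
    pyinstaller --onefile 10f.py

    The --onefile flag bundles the interpreter and your script into a single executable, which ends up in dist/.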

    If you’re just mounting your script in the container - you’re better off adding it directly at build time.


  • I haven’t had any experience with Eweka, but this is the reason why people tend to have multiple providers from different backbones and multiple indexers - to increase the chance of completion. Weirdly, Eweka does not follow DMCA but NTD, which I’ve seen regarded as slower to take down content, so in theory the experience should be better, especially on fresh content.

    Your mileage will vary greatly depending on which indexers/providers you pick, and unfortunately it’s very difficult to say whether it will meet your expectations until you try different options.

    If you’re willing to spend some more on it, you could try just looking for a small and cheap block account from a different backbone to see if it helps with the missing articles, but there are no guarantees.


  • It’s very difficult to predict the future, but my bet would be no (to the “in 20 years” question).

    I doubt the hardware will last 20 years, and eventually it’ll become hard to source parts as the popularity falls off, even if you can repair it yourself. I’m sure anything with an online dependency won’t work either, but offline games have a chance.

    But the real question is: would you want to use the Switch in 20 years (or honestly, even today)? There is already a better alternative (the Steam Deck) with a much more open platform and way more capabilities, and I believe it can already emulate Nintendo games (although I have no first-hand experience with that).

    I have a Switch myself and personally would never recommend it to anyone.




  • Your ISP can most likely tell which VPN you’re using (unless you also use Tor, and even then there are theories that a lot of it is run by law enforcement… depends on how paranoid you are); they will still see the quantity of traffic going from your home to the VPN and vice versa. All they need to do is check the IP, and they’ll likely find it’s in use by … VPN service.

    As long as using a VPN is not illegal in your country, you can pay for it however you want (in some places paying with crypto may make it more suspicious than if you just paid through PayPal). If law enforcement really wanted to find out which VPN service you use, they probably could; the payment would only make it a tiny bit easier.

    The key point, as mentioned multiple times, is to use one you trust. There’s no objectively best one, but you’ll find a lot of objectively bad ones (for privacy) if you research them. As a start, just never use any that sponsor YouTube videos or blog articles - pretty much all of those are crap.


  • VPNs usually route your DNS through them as well - sometimes to other DNS servers, sometimes just to your original DNS server but through the VPN tunnel; it’s kinda up to your VPN config. All of the VPN services I’ve used to date did this, although they were all reputable ones. I wouldn’t recommend using a questionable VPN though.

    Dnssec only verifies authenticity of the server and the integrity of the data, so it helps to prevent man-in-the-middle of DNS, it doesn’t provide privacy. Look into DNS over Https (DoH) instead. It provides e2e encryption for your DNS traffic which achieves what dnssec does, but also gives you privacy. DNS over TLS (DoT) also does this, but it runs on a different port so it’s easier to block (e.g. if your isp decided they don’t like private DNS), while with DoH your DNS traffic looks the same as other web traffic - and afaik it can’t be blocked. As above, it’s likely this is not needed for use with a VPN, but I’d recommend looking into in general for use even when not on the VPN. Things like controld or nextdns can give you even more peace of mind (although read up on their policies for yourself)