mea_rah
mea_rah@lemmy.world
0 posts • 222 comments

Does this mean that, in order to keep the price down, you have to switch providers every year?

It’s even worse than that. The increase also happens during your contract (pretty commonly 2 years) if you have one. So effectively you’re signing a contract where you don’t really know how much you’re actually going to pay, but you’ll be fined if you decide to end it early. 🤯

But wait, there’s more!

For example, Eir increases the base price while keeping the discounts the same. On a typical bill you might have a €100 base price for all the services and, let’s say, a €40 discount, which leaves €60; that’s what you see as your bundle price. After a price increase of, say, 8%, you’ll have a €108 base price with the same €40 discount, so your bundle ends up at €68. In practice that means the “8%” price increase is actually 13%. If they decide to increase the price by 10%, it actually amounts to almost 17%; you get the idea.

But wait, there’s more!

The price increase is done yearly in a specific month. So it’s not after a year of your contract; it could just as well be right after you sign. If we assume the “8%” (effectively 13%) increase right after the contract starts and then yet another one in the second year of a 2-year contract, you’re at that stage paying a €116.64 base minus the €40 discount, roughly €77. So almost the entire second half of the 2-year contract costs about 28% more than the advertised bundle price for which you initially signed up.
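To make the arithmetic concrete, here’s a quick Python sketch of the fixed-discount effect, using the illustrative €100/€40/8% numbers from above:

```python
# Fixed-discount effect: the discount stays at €40 while the base price
# grows, so the *bundle* price grows faster than the headline percentage.

def bundle(base: float, discount: float = 40.0) -> float:
    return base - discount

base = 100.0
advertised = bundle(base)  # €60, the price you signed up for

for year in (1, 2):
    base *= 1.08  # headline "8%" increase, applied to the base only
    print(f"year {year}: base €{base:.2f}, bundle €{bundle(base):.2f}, "
          f"effective increase {bundle(base) / advertised - 1:.1%}")

# year 1: base €108.00, bundle €68.00, effective increase 13.3%
# year 2: base €116.64, bundle €76.64, effective increase 27.7%
```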

Obviously different bundles will lead to different prices, but the above is illustrative and, IMO, a very realistic scenario.

Why there isn’t more public outcry about this, I have no idea. Perhaps in the grand scheme of things, with the ever-increasing cost of living, this isn’t a huge monetary difference in absolute numbers, however scummy the practice might be.

Or is this a permanent increase across the board?

AFAIK this is not permanent, but the longer you stay with the company, the more you’ll pay. It makes no sense to punish loyal customers, but here we are. You pretty much have to change companies or, in some cases, call them and sign a new contract that will hopefully reset the price back to the advertised amount (until the next increase). I’d advise finding a provider that doesn’t do this.


I wonder to what extent these S-300 systems are there because Russia shells civilian targets in Kherson almost daily.

The bridgehead protection might be just a nice side bonus.


This is backed by an official update from Hanna Maliar, Ukraine’s Deputy Defence Minister. She tends to delay updates by a couple of days due to opsec and usually doesn’t announce liberated areas until Ukraine has solid control over them.


Is it possible that the files and templates could (should?) be part of a separate role that you perhaps pull in as a dependency?

The other pattern I saw out there was that the file/template wasn’t part of the role but was instead provided by the playbook using it: the file lived in the repository with the playbook, and the template/file path was passed to the role as a parameter.


It’s impressive how valuable a source Oryx has become, considering it’s essentially the work of a single individual.


Replace “nix flakes” with “Docker” and you have your answer from almost a decade ago.


This is programmerhumor, so perhaps allow for a bit of hyperbole on my part. I wasn’t being completely factual.

However, the initial days of Docker effectively promised to solve the exact same “it works on my laptop” problem. The idea was that the developer builds a Docker image and pushes it to a repository, where it can pass through CI, and eventually the same image reaches production.

As you can see, this effectively reproduces the EXACT content as well, because you transfer the files as a set of tar archives (the image layers).

It didn’t work out for many reasons. One of them is that the problem is often not so much the exact files, but the rest of the environment: DBs, proxies, networking, etc. I’ve seen an image misbehave in production due to a different kernel version/configuration.


First of all, thank you for the civil discussion. As you say, this is a weird place to have such a discussion, but it’s also true that these jokes often have some kernel of truth to them that makes these discussions happen organically.

So with that out of the way and with no bad intentions on my side:

I’ve noticed you use “Dockerfiles” and “Docker images” interchangeably, and this might be the core of the misunderstanding here. What I was describing is that:

  • The developer builds an image (using a Dockerfile or otherwise) on their laptop and pushes that image to a Docker registry.
  • This exact same image is then used in CI for integration tests, scanning, whatever…
  • If all is good, that same image is deployed to production.

So if you compare the SHA of the image in production and on the developer’s laptop, the checksums are the same; the files are identical. Nix arrives at this destination kind of from the other side, arguably in a more elegant way, but in both cases the files are the same.
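As an illustration of the “same checksum” point, here’s a minimal sketch using the real `docker inspect` command; the `myapp:1.2.3` tag is made up:

```python
# Minimal sketch: Docker images are content-addressed, so comparing the
# sha256 image IDs on two machines tells you the layers are identical.
import subprocess

def image_id(ref: str) -> str:
    # `docker inspect --format {{.Id}}` prints the image's sha256 content ID
    result = subprocess.run(
        ["docker", "inspect", "--format", "{{.Id}}", ref],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# "myapp:1.2.3" is a made-up tag; run this on the laptop and in production
print(image_id("myapp:1.2.3"))  # e.g. sha256:3f4a...
```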

This was the promise (or one possibility) in the early days of Docker. Obviously there are some problems with this approach. Like, what if the CPU architecture of the laptop differs from the production server? Well, that wasn’t a problem back in 2014, because ARM servers didn’t exist in any meaningful way. There’s also the disconnect between the code that generates the image and the image itself that goes to production; how do you trust the environment (the laptop) where the image is built? Etc. So it just didn’t stick as a deployment pattern.

Nix solves many of these things. But in terms of “it works on my laptop”, what I wrote in the previous comment applies: the environment differences themselves, rather than slightly different build artefacts, are frequently the problem. Nix is not going to solve the problem of a subtly different database because the developer runs MariaDB locally to test while production uses a DB managed by AWS. The developer isn’t going to catch quirky behaviour in how their app responds to a proxy, because they don’t run an AWS ELB on their laptop, but production sits behind one. You get the idea.

When a developer says it works okay on their laptop, what it usually means is that they don’t have a 100% copy of production locally (because obviously they don’t) and that, as a result, they didn’t encounter this specific failure mode.

Which is not to say that Nix is a bad idea. Nix is great. I’m just saying that there’s more to the “laptop problem” than reproducible builds; we had those even before Docker images.

Hope that makes sense. And again, thanks for the civil discussion.


This project is using Home Assistant, but the ESPHome configuration is really simple, so perhaps you could adapt it to work without HA?

I’m sort of working on something similar, but it’s not complete at all. The idea is that my doorbell will post a message to MQTT, where I have an automation in place to snap a picture and post a message to Matrix that someone’s at the door.
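The listening side of that can be tiny. A rough sketch with paho-mqtt, where the broker address and topic name are made up and the camera/Matrix bits are left as a stub:

```python
# Rough sketch of the automation side: listen for the doorbell's MQTT
# message and react. Uses the paho-mqtt 1.x callback API.
import paho.mqtt.client as mqtt

BROKER = "192.168.1.10"      # hypothetical broker address
TOPIC = "doorbell/pressed"   # hypothetical topic the doorbell publishes to

def on_message(client, userdata, msg):
    # This is where snapping the picture and posting to Matrix would go.
    print(f"doorbell event on {msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER)
client.subscribe(TOPIC)
client.loop_forever()
```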

ESP32 devices have pretty limited hardware, so you have to keep your expectations low if you don’t want to outsource the automation to some external system. You could, however, definitely do simple things like an HTTP POST on a button press, which is enough to send a message via some chat or a push notification to your phone.
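If you wanted to skip external systems entirely, that “HTTP POST on button press” idea might look roughly like this in MicroPython (the GPIO number and endpoint URL are placeholders; ESPHome’s YAML would be another route):

```python
# Rough MicroPython sketch for an ESP32: fire an HTTP POST when the
# doorbell button is pressed. Assumes Wi-Fi is already connected.
import time
import machine
import urequests

BUTTON_PIN = 4                            # hypothetical GPIO for the button
WEBHOOK = "http://example.com/doorbell"   # placeholder endpoint

button = machine.Pin(BUTTON_PIN, machine.Pin.IN, machine.Pin.PULL_UP)

while True:
    if button.value() == 0:  # active low: button pressed
        try:
            urequests.post(WEBHOOK, json={"event": "doorbell"}).close()
        except OSError:
            pass  # network hiccup; keep the loop alive
        time.sleep(1)        # crude debounce / rate limit
    time.sleep_ms(20)
```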

I have a blog post WIP around this that specifically tries to avoid Home Assistant, because there are already a ton of tutorials out there for HA.


It kind of depends on your priorities. In my experience it’s usually much easier to upgrade to the latest version from the previous one than to jump a couple of versions ahead because you didn’t have time for upgrades recently…

When you think about it, from the development point of view the upgrade from the previous version to the latest is the most tested path. The developers of the service probably did exactly this upgrade themselves. Many users probably did the same and reported bugs. When you’re upgrading from a version released many months ago to the current stable, you might be the only one with that combination of versions. The devs are also much more likely to consider all the changes introduced between the latest versions.

If you encounter an issue upgrading, how many people will have hit the same problem with your specific combination of versions? How likely are you to find the issue on GitHub, compared to the crowd that always upgrades to the latest?

Also, when moving between the latest versions, there’s only a limited set of changes to consider if you encounter issues. If you jumped 30 versions ahead, you might end up spending quite some time figuring out which version introduced the breaking change.

Also, no matter how carefully you look at it, there’s always a chance the upgrade fails and you’ll have to roll back. So if you don’t mind a little downtime, you can just let the automation do the job, and at worst you’ll do the rollback from backup.

It’s also a pretty good litmus test: if a service regularly breaks when upgrading to the latest version without any good reason, perhaps it isn’t mature enough yet.

We’re obviously talking about a home lab here, where time is sometimes limited but some downtime is usually not a problem.
