I disagree, at least on end-user devices as opposed to servers.
If you make it possible to defer updates indefinitely, users will. Guaranteed. Doesn't matter how urgent or critical the update is, how bad the bug or vulnerability it patches is, how disastrous the consequences may be: they'll never, ever voluntarily apply them.
If you're running a server, and willing to accept the risk of deferral because 1) you're in a better position to assess the risk and apply compensating controls than a regular user is, and 2) you're OK accepting the personal risk of having to explain to your boss why you kept deferring the urgent patch until after it blew up in your face, then yes, you should have a control to delay or disable it.
But end users? No. I used to believe otherwise, but now I've seen far, far too many cases where people train themselves to click "Delay 1 day" without even consciously seeing the dialog.
The real sin is combining security updates with feature updates. An argument can be made for enforced security updates (1). There is no good argument for forcing feature updates.
Most security-only updates have a low risk of interfering with the user or causing instability. Most feature updates have a high risk of doing so.
(1) Although I think there should be some way of disabling even those, even if that way is hard to find and/or cumbersome enough to keep regular users away.
The problem is that there are dozens of security updates every month, so even if you can skip feature updates, you'll have to reboot every Patch Tuesday anyway.
Even the Server Core edition, which has a much smaller "surface area", needs reboots almost every month.
Alright, I can buy that. Although from a dev POV I can also appreciate the not-fun of testing a combinatorial explosion of security updates vs features.
Basically, if I trust you (the dev/software maker/whatever) not to change UIs and add in bullshit, I'm okay having auto updates on. Unfortunately, I can't trust much now.
> I disagree, at least on end-user devices as opposed to servers.
And who determines what is an "end-user device" vs a "server"?
> If you're running a server, and willing to accept the risk of deferral because 1) you're in a better position to assess the risk and apply compensating controls than a regular user is, and 2) you're OK accepting the personal risk of having to explain to your boss why you kept deferring the urgent patch until after it blew up in your face, then yes, you should have a control to delay or disable it.
So you do want choice after all it seems. Who do you think should make this choice on risk vs. workload/criticality?
I would say you actually agree with me mostly based on your comments, but you have not clarified _who_ makes these choices. I'm saying as the consumer, _I_ should get to make that choice. In the enterprise, my admin will make that choice via group policy, but I do not want Microsoft determining what I'm allowed to do with my OS. They are of course free to keep doing that, but then I also have the right to keep not buying their products.
No thanks. I should be able to use any copy of Windows for whatever use case I want. MS is free to disagree, and I am therefore free to keep not buying their products.
I'm the wrong person to ask about that. I've gone ages between Debian reboots while applying regular updates, and I'm not sure what it is about the Windows model that requires a reboot after patching a few things.
Fedora also wants to reboot to install (dnf) updates offline; as I understand it, that's to prevent potential instability from running processes getting confused when their files get swapped out from under their feet.
It's also good since you can't swap out the kernel without rebooting.
I assume Microsoft took the same approach, just replace everything offline then reboot into a fully up-to-date system without any chance of things in RAM still being outdated.
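For reference, the Fedora offline-update flow described above is driven by the `offline-upgrade` dnf plugin (from dnf-plugins-core): updates are downloaded while the system is running, then applied in a minimal boot environment before starting the fully updated system. A rough sketch of the commands involved (assuming the plugin is installed; exact subcommands may vary by dnf version):

```shell
# Download pending updates without applying them to the live system
sudo dnf offline-upgrade download

# Reboot into the special offline-update boot target;
# packages are installed there, then the machine reboots
# again into the fully up-to-date system
sudo dnf offline-upgrade reboot

# After the upgrade, inspect the transaction log if needed
sudo dnf offline-upgrade log
```

Since nothing is replaced while ordinary processes are running, no process can end up executing a mix of old and new library versions, which is the instability the offline approach avoids.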