Windows 0day privilege escalation still not fixed (chromium.org)
410 points by zaltekk on Dec 23, 2020 | hide | past | favorite | 192 comments


I'm dumbfounded why Microsoft can't fix this, it's essentially just a parameter validation issue. They must have some ghoulish software actually relying on the broken behavior.

Add to that their recklessly incompetent initial fix:

https://twitter.com/maddiestone/status/1341781306766573568
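For flavor, here is a minimal Python sketch of the bug class being reported: an IPC-style handler that trusts a caller-supplied offset when copying into a shared buffer. The names and structure are made up for illustration; this is not Microsoft's actual code.

```python
# Hypothetical sketch of the vulnerability class: a message handler that
# copies data to a location taken straight from the request, with and
# without the missing parameter validation.

BUFFER = bytearray(256)  # stands in for a mapped shared-memory region

def handle_message_unsafe(offset: int, data: bytes) -> None:
    # Trusts the caller-supplied offset completely -- the bug.
    BUFFER[offset:offset + len(data)] = data

def handle_message_checked(offset: int, data: bytes) -> None:
    # The validation that was missing: reject writes outside the buffer.
    if offset < 0 or offset + len(data) > len(BUFFER):
        raise ValueError("offset/length outside shared buffer")
    BUFFER[offset:offset + len(data)] = data
```

As I read the writeup, the botched patch still let attacker-controlled values through, i.e. behavior closer to the first function than the second.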


Having gone from sole developer at a tiny company to a developer at Microsoft (and now back again), I'd like to propose a corollary to Hanlon's Razor: Never attribute to either malice or incompetence what can be explained by complexity that you don't know about.


Isn't allowing complexity to grow to such an extent that you cannot fix critical bugs in a reasonable amount of time a form of incompetence?


In my view, it is a form of incompetence so common, and in practice so essential for growth as a business, that it has become a ubiquitous baseline against which we should strive to differentiate ourselves.

It is complexity that is accepted in small amounts across dozens of projects over many years. Projects that add real value to the business while contributing to an invisible debt that, from the business point of view, matters only once it has started directly affecting revenue.

Eventually an organization can't scale without a major refactoring, rewrite, or an entirely new market vertical.


It's simply not economically viable to build software at the scale of windows without such issues. There are thousands of teams (not people) working on it and hundreds of thousands of people over the lifetime of the software. Even companies like Microsoft have pressure to deliver on time and be frugal, not just startups.

Incompetence is spending too much time making software into a golden-egg-laying donkey. As is building software that becomes impossible to change after a couple of years. Finding the fine line in between these extremes is what professional software engineering is all about.


Which is why there is a business cycle, where every once in a while a startup will wipe the floor with a big, inflexible competitor despite not having even a thousandth of the resources.


Or just end up being bought by the competitor and turned into yet another of its departments.


That's organizational incompetence, not technical incompetence, however.

IMO there is no good fix for that past certain company size.


It's impossible to have software as massive as Windows with 30+ years of backward compatibility that is not extremely complex.


Your comment is about 20-25 years too late.

Most of these problems are likely attributable to legacy code and backwards compatibility.


You’re talking about Microsoft.

It takes me a couple of lines of code now to spin up a web server that can handle thousands of connections. Or an app with a beautiful, fluid UI.

But I remember that I am standing on the shoulders of giants. What seems like zero complexity to someone on the shoulder is actually not.

Sometimes the giants fuck up too. Ultimately it is incompetence. But not the kind that is attributable to laziness or stupidity. Engineers at Microsoft, I would assume, are like engineers everywhere - no smarter, no dumber.

Just because I can build a Lego bridge perfectly doesn't mean the guys who've built an imperfect actual bridge are incompetent. They're just solving very different problems.


It sounds like what you are saying is that over the long term, there are no competent developers or organizations.

Or perhaps that due to survivorship bias, the evidence for incompetence that we see sticking around longest is also the most pathological.


I have a modification to Hanlon's Razor: anyone who invokes it is both.


Except the user of a commercial application does not need to know about the complexity. They just care if a security loophole is fixed.


This is very true. So often I hear people arguing that something surely is simple to do and obviously people are idiots for not being able to, all while having no idea whatsoever about the system to which they are referring.



Alt+tab has been thoroughly broken on Windows 10 20H2 for over two months now. It randomly switches between the second and third window. No fix in customer facing versions yet either.

They are slow and incompetent.


This may explain it:

https://www.wsj.com/articles/microsoft-diminishes-windows-ro...

> The company is breaking Windows in pieces. The platform technology, on which Microsoft’s partners build their own devices, apps and services, will now fall under Scott Guthrie, who runs the Azure business. Mr. Guthrie’s unit, called Cloud + AI Platform, will also include the company’s mixed-reality business, including Microsoft’s Hololens device, as well as its artificial-intelligence business.

Maybe someone with insider knowledge will comment, but it looks like Windows is far from being a priority for Microsoft.


Man I still can’t believe Azure is number 2 behind Amazon for cloud computing. When they first started their marketing push to developers years ago, which I remember was very aggressive and full of evangelism marketing which I disliked, I kind of blew them off as some mid tier or old school oddity.

But it really shows you how powerful their enterprise sales machine is and the legacy reach of existing programming languages/frameworks.

It’s always easy to underestimate Microsoft I guess. Ditto with Oracle and the like. From our view down in the startup world.

That said. Alt-tab not working is an embarrassment though. And I hope they really haven’t let their OS QA slip this badly in favour of some growth area or whatever.


The comparison of cloud platforms truly is heavily distorted at a startup. Most companies aren't comparing AWS and MS in a vacuum when presented with the decision. How much of the world's software runs on dotnet? Windows, Office, and Active Directory command their own predominant shares, not to mention other tools like CRM. MS earned a reputation for the battleship; relatively stable, plays nicely with their other products, LTS, backward compatibility, etc. Suppose the CTO for an insurance claims company is presented with the decision of migrating a legacy platform which already runs on a fat stack of MS products. They don't even need a salesperson to convince them Azure is the obvious first choice, because to them it's just another cannon on the battleship.

In my experience the tribal evangelism for AWS is... intense... and they've somehow convinced people to proselytize unpaid on their behalf. Having worked with both I'll occasionally mention Azure if only to revel at the spicy takes. Honestly though, I worked at that claims company I mentioned. Likewise the startup I work at today threw their hats in with AWS. Both were respectively good decisions, both bad in their own right. As ever, try to do everything and something is going to give.


> MS earned a reputation for the battleship; relatively stable, plays nicely with their other products, LTS, backward compatibility, etc.

Microsoft earned that reputation with a lot of development and organizational practices that they've since abandoned. It may take people a while to notice, but today's Microsoft is not prioritizing stable software interfaces and backwards compatibility.

It took a lot of testing to ensure existing software and hardware continued to work with new operating systems, and they're not doing as much testing anymore.


Microsoft had some of the best documentation in the world. People really don't understand how valuable that is, and how important. Microsoft in 2020 certainly doesn't understand. They produce vast reams of auto-generated "documentation" where the only text is the function names with spaces added between the words.


Reminds me of PowerShell. There's extensive, detailed, and super-useful documentation for pretty much all the cmdlets available... but it's not installed by default. Get-Help ... will happily tell you that you need to download help if you want to see any details beyond the command signature. Who in their right mind thought this was a good idea? Such documentation should be shipped with the default install.

I have it at the top of my mind because it bit me twice in recent months. I had to do some PS work on some VMs that didn't have Internet access (beyond RDP). Sure, I can Alt+Tab to a browser on my machine, but at that point, why even have Get-Help? Contrast that with the Emacs experience, where everything is documented, and documentation is easily accessible, offline, and by default.
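For what it's worth, the help files can be staged for offline boxes. Roughly this workflow should do it (these are standard PowerShell cmdlets; `\\fileshare\pshelp` is a placeholder path, and Update-Help needs an elevated prompt):

```shell
# On a machine with Internet access:
Update-Help                                    # install help locally
Save-Help -DestinationPath \\fileshare\pshelp  # or stage a copy on a share

# On the offline VM, point Update-Help at the staged copy:
Update-Help -SourcePath \\fileshare\pshelp

# After that, full docs are available offline:
Get-Help Get-ChildItem -Full
```

It doesn't excuse the default, but it beats Alt+Tabbing to a browser.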


> I'll occasionally mention Azure if only to revel at the spicy takes.

Well there are a lot of old school Microsoft haters. I used to know a bunch of Unix guys ~ten years ago whose unstated principle was that anything Microsoft did was unilaterally a bad idea (and everything Unix ever did was always the best possible way).


Microsoft is also mostly a pure tech company. Amazon and Google (Alphabet) are more pervasive and threatening to other industries.

For that reason, I'm not surprised. I've seen the decision come down to not wanting to give money to the other two many times. MS is in a great position there.


They're including managed services like Office 365 in that number, though.

Might be fair because AWS includes their services as well, but I'm pretty sure AWS's main income is from EC2, while Azure's is business tooling like Active Directory, Office, etc.


And they seem to be pushing customers very hard on moving from on-prem to cloud for Office and email stuff. I don't know if they're subsidising the cloud services for now, or what.


It's really obvious that Azure has much lower adoption than AWS. For example, I evaluated their new Front Door combo accelerator / CDN product recently. One page listed their customers, and there were about a half dozen total. Out of curiosity, I scanned the top 1000 domains and found none using it. I got the impression that I was one of the few people even evaluating it, let alone using it in production.

Despite just kicking the tyres on the thing, I found about half a dozen bugs or missing critical features. That's just shocking to me.

For comparison, CloudFront -- the most direct competitor -- is far ahead in features and is used by far more customers. It also works out of the box.

All of the other Azure services other than plain virtual machines give me the same impression of being a first adopter and one of only a handful of customers.


Those customers are probably using it in some of their services. I’ve worked with a number of enterprise companies who are moving to the cloud and most of them have never considered something like front door to route their traffic.

I’ve brought Front Door into their architecture for the services I was working on, but even if they decide it’s a great thing for the entire company to use, it will take them over a year to get security to approve it, and then multiple years to get it rolled out to all of their products.

With Azure’s core customer base being major enterprises, it’s not surprising that you had a hard time finding evidence.


They still have a big .NET following and they make it easier to use Azure via their toolsets. I feel like it was mildly obvious that they'd do okay.


Alt-Tab isn't broken for everyone. It's working on all three of the windows machines I regularly use.


Alt-tab is working for me. Meanwhile, the bug first seen in prerelease versions of the new shell on NT3.51 in 1995, where the taskbar won't hide, is still present in the latest Win10 insider preview.


Sounds like a feature, not a bug!


I may be wrong but I don't believe Microsoft even has a dedicated Windows division any more.


Well that and they got rid of their QA and test engineers so nothing is caught before it's sent out... you just can't rely on free beta testers for everything.


> You just can't rely on free beta testers for everything.

Linux distros seem to manage pretty well...?

Or is this "it's only bad if Microsoft do it"?


We pay RH rather a lot of money for the excellent testing and integration they do. (And alt-tab works, if you want it to.)

Or if you're trying to limit this to individual use, I'll grant you equivalence once Microsoft stops charging their beta testers and offers them the source.


Fedora and CentOS Stream are RHEL's upstream, so they can be seen as the "beta test" from RHEL's perspective. (Gamma?)


They all do upstream integration tests because nobody sane likes being caught by surprise.

Even I do it, and I work for a company orders of magnitude smaller.


That’s an interesting point. Which for-profit Linux distro is using you as an unpaid beta tester for their closed-source code?


Fair point. But 90% of Linux submissions are corporate, last I checked. Corporations (usually) do a lot of internal testing before submitting, and then maintainers have to review submissions. This is long before the public ("beta testers") has to deal with any bugs.

And that's only the kernel. Distributions and their package maintainers have their own quality controls, as do cross-distribution upstream developers. Public bug trackers (beta testers) are a complement to these. The division of labour in quality control of Linux systems is fine, diverse, and of variable effectiveness before beta testers come into the picture.


If I'm paying for it, which I do? Hell no. I also use Linux, but I don't pay for it; it's a hobbyist / power-user OS, and I can actually fix things there, unlike Windows.


Yep. For those seeking a temporary remedy: open Registry Editor, navigate to HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer, create or modify a REG_DWORD value named AltTabSettings, and set its value to 1. Restart your PC (restarting the shell alone is possible but will currently introduce more bugs).
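If you'd rather not click through Registry Editor, the same change as a .reg file you can merge (this just encodes the steps described above):

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer]
"AltTabSettings"=dword:00000001
```

Delete the value (or set it back to 0) to undo it later.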


Thank you! This is honestly the best winterveil gift I have received this year, very much appreciated!

When you say "temporary" remedy, is that implying there will be a real fix that obsoletes this, or this workaround will stop working at some point?


It's fixed in the next major release so hopefully we can restore the value back to 0 when it's pushed out in Q1 2021.


Awesome, thanks for the fix, and the super quick replies, have a great season. :-)


Oh god I thought I was the only one who had noticed this... it drives me mad every single day.


Sure it's not including Edge tabs in the alt tab screen? You can turn that off.


Me2. I tried three keyboards before concluding that wasn't the problem.


I rely heavily on alt+tab. I haven't noticed this. Can you explain a bit more?


It does exactly as described. Sometimes it will shift to the second window as intended. Often it will skip to the third window open instead, requiring one to continue cycling back to the second window.


You can Alt+Shift+Tab to go in reverse direction.


Or just release Tab while keeping Alt depressed and navigate the thumbnails with the arrow keys.


I think it only happens if you use Edge


Oh okay, I use FF. This must be why. I alt+tabbed 100+ times and didn't experience the issue. Didn't Edge tabs recently start showing in alt+tab? I wonder if it's because they are switching between two tabs and alt+tab is showing the last tab they clicked on rather than the last window they had up.

FF uses CTRL+Tab to cycle through tabs like alt tab.


Same here, along with Win+Tab (the "exploded" window view) crashing the shell. Every time I have to boot into Windows, I'm even more disappointed by what has happened to what was once a great OS.


Ubuntu user here and very occasional Windows user too. Is the problem only with the GUI (same as liking or not Gnome Shell or having it crash) or is the problem with the "real" OS under the GUI?

Example 1 for the GUI: it drives me crazy that I can't resize the dialogs to edit the properties of a scheduled task. They were probably designed for 800x600 screens and they were a bad design back then (a text area please and join the lines.)

Example 2 for the core: a process keeps track of its parent, but if the parent exits, the process doesn't update the reference, so you can end up with a reused process ID in the child's process data table. I ran into that a couple of weeks ago.
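To illustrate for anyone who hasn't hit it, here's a toy Python simulation of that stale-parent record. It's a simplified model of the behavior, not the real Windows API:

```python
# Toy model: Windows records the parent PID at process creation and
# never updates it, so once the parent exits and its PID is reused,
# the recorded value silently points at an unrelated process.

recorded_parent = {}  # pid -> parent pid captured at creation time

def create_process(pid: int, parent_pid: int) -> None:
    recorded_parent[pid] = parent_pid

create_process(50, parent_pid=4)    # the "parent" process, PID 50
create_process(100, parent_pid=50)  # child records parent = 50

del recorded_parent[50]             # parent exits...
create_process(50, parent_pid=7)    # ...and PID 50 is later reused

# The child's record is stale: PID 50 now names a stranger.
print(recorded_parent[100])         # 50
```

On POSIX systems an orphaned child is re-parented (getppid() starts returning the reaper's PID), which avoids exactly this hazard.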


I don't really develop for Windows, so most of my problems are related to OS GUIs and bundled software. Like how for some reason most of the bundled UWP programs stopped working on my laptop after an update, or how Windows 10 still supports two audio "channels" (default and communication) which many apps respect, but the output device switcher only switches the default one and not the communication one, and these options aren't even available in the "new" (it's been like 5 years at this point!!) Settings app, so you have to use the old "control panel" one, which has now been removed from the menu and sometimes even search, and the only way to get to it is to search for something related and, when it opens, switch the tab to what you need - even though this is not some obscure feature but something that every mainstream communication program uses, most users eventually run into, and everyone constantly bitches about, yet there is still no fix or even acknowledgement from MSFT, because they fired all their testers/QC and care less about their paying customers than even the meanest of OSS maintainers. </rant>

Regarding the more technical side, while I'm entirely unqualified to comment on kernels and low-level APIs, I actually like some NT/Windows design choices quite a lot more than Linux. Network transparency (\\shares) is a big one and in general the way external filesystems are handled (mounting makes no sense for desktop use - especially the way udisks or whatever does it). And while the way programs are stored and installed on Linux is a neat idea in theory, it all breaks the moment one app doesn't follow the standard and it imposes far too many restrictions that Windows doesn't suffer from.

Edit: sorry for the rant, I'm stupidly tired and have had a few drinks. Happy holidays!


Perhaps they are trying to avoid breaking 3rd party code.

I've spent quite a lot of time poking around in the print spooler and my gut feeling is it's probably riddled with issues like this.


I would agree. I was baffled by how Windows will basically take anything from print drivers and ram it into the spooler.


Also I suspect nobody wants to work on it because who wants to do printing.


I actually think working on printing infrastructure might be fun. There is image processing, font rendering, etc. which are fun niches. And you can see something physically tangible from your work.

It's not going to increase in relevance or take over the world, but it is probably satisfying.


I strongly suspect it's a haphazard, under-tested mess of compat hacks to work with thousands of models of printers, with thousands of quirks and bugs that Microsoft has painfully worked around to take care of their customers.

As opposed to Apple, who just assumes you don't use printers older than 2 years and liberally breaks old printers and scanners.


App compat hacks can be a fun sport for the right kind of person. I was on the Windows team for a few years and I sometimes looked at those or saw posts about them on mailing lists. It can get interesting.


Printing was complicated in those days because a computer couldn't hold a bitmap of a printed page in memory and still do anything. Early PostScript printers were also much more powerful than the PCs that drove them, often with 32 bit RISC CPUs, dozens of megabytes of RAM and some even had SCSI disks.

Now that most computers can hold a page in their L2 caches, it's easy and much faster to render in memory and send a bitmap over to the printer driver to let it print with whatever options it wants. All the driver needs to do is accept a bitmap and print it as well as it can, and you can safely ignore most of the printer's own PDL.
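Some back-of-the-envelope numbers for that, in Python (US Letter page; the resolutions and bit depths are just representative choices):

```python
# Rough memory cost of a full-page bitmap at a given resolution.
def page_bitmap_bytes(dpi: int, bits_per_pixel: int,
                      width_in: float = 8.5, height_in: float = 11.0) -> int:
    pixels = int(width_in * dpi) * int(height_in * dpi)
    return pixels * bits_per_pixel // 8

# 300 dpi, 1-bit monochrome: ~1 MB -- already painful on a 640 KB PC.
print(page_bitmap_bytes(300, 1))    # 1051875
# 600 dpi, 24-bit colour: ~100 MB -- trivial for a modern machine.
print(page_bitmap_bytes(600, 24))   # 100980000
```

Even the monochrome case didn't fit in the memory of the machines that drove early laser printers, which is why the rendering lived on the printer side.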

Most of the thorny issues in printing are with special-purpose printers - thermal, barcode, embedded text only, etc.

Apple uses CUPS and it should work fine with older printers. Last time I checked, CUPS could handle an HP Deskjet 500 (if you have the parallel port)


Fun story I heard from an aerospace engineer - they were running some heavy simulations and producing graphs.

He found the printer that was emitting the graphs was actually more powerful than the (very expensive) machines running the simulations, to the point that converting the simulation to postscript (or some variant thereof) and running it directly on the printer was some orders of magnitude faster.


Indeed. I've seen printers churn out beautiful fractals at a fraction of the time it took 68K Macs to do it.


> There is image processing, font rendering, etc.

And stumble upon somebody else's code that you have no permission to modify.


That must be some kind of purgatory where developers go to slowly die inside.


I said it somewhat tongue in cheek because I have been working with printing technologies for my whole career.


I’ve been working on a printing related project at my job and it’s been awful. Printers are the worst.


In the 1980's with MS-DOS you started out printing without a print spooler.

When IBM chose the I/O ports for the PC, they used a parallel port compatible with early office printers from Centronics, but used a female DB-25 connector (previously seen mainly on external serial RS-232 modems) on the PC. The printer end of the cable had the Centronics type connector.

By that time Centronics and other pre-DOS printers were already issued with standard or optional serial RS-232 input, or parallel.

Serial printers most often still used a DB-25 like modems, and parallel the Centronics connector. The only bidirectional communication was logical handshaking which had dedicated conductors in the cable, serial only needed one conductor plus ground for the data and 2 or 3 more conductors if you were even using handshaking. Parallel uses 8 data conductors and about half a dozen logic lines.

The early IBM PC offered an optional serial RS-232 I/O card with a male DB-25 (9 pins is actually more than enough for serial, but DB-9 connectors didn't show up much until you got 2 COM ports on a single card); then, once people started using mice, those were serial DB-9 and built into the mainboard. This was the bidirectional port used to communicate with modems and send to printers. Or anything else: the RS-232C standard specifies that you must be able to short any of the conductors or connect them to anything within +/- 25 VDC without damage, whether the hardware is powered on or not. Not everything meets the full standard.

OTOH, parallel connects directly to the bus and can be very sensitive to incorrect connections or plugging cables while powered. Often damaging motherboard chips for a while there if a stray DB-25 serial modem cable were plugged into the parallel port of a PC. Which might contain jumpers between pins in either or both RS-232 connector hoods to effect self-handshake.

The printer was conceptualized to be the one _peripheral_ that every office was going to surely have or the PC could not function like a typewriter, so the printer has actually always been part of the PC which you connect first before powering them up as one.

Anyway, DOS would only do one thing at a time, so for paperwork you would have to wait for the printer to finish before you could move on. Some printers were slower than others. Fast was not in the vocabulary for early-adopting offices.

Within a couple years at Radio Shack you could get a print spooler to come between the CPU and the printer. An external hardware print spooler of course.

What a time-saver! When you hit PRINT your data went into the spooler much faster than the printer could accept it, within a few seconds you had your command prompt back and the printer could finish the task on its own, however long it took.

Eventually significant buffers came inside the printers themselves, plus Windows became accepted in mainstream offices and it had a software spooler.

Printing was handled within Windows at the actual DOS layer for the first decade, while Windows was only a shell around whichever DOS you installed it over.


I remember writing a print spooler for the Apple II. It only worked with RS-232 because the Centronics interface didn't support interrupts.

I don't think parallel ports connected directly to the bus, there was always an interface chip like the Intel 8255.


You're right, just not as well isolated as RS-232.


How bad could it be? It's about assigning people to the task and, if necessary, increasing the pay. If it's a bigger mess, make it a priority and have your best people fix it ASAP. Printing and security are important.


From my knowledge of the horrors of printing and being the 'printer guy' for a Drafting department where we dealt with all kinds of interesting printers...

The scenario I'm theorizing is that there's probably at least one major vendor whose drivers are dependent on this behavior, and Microsoft is trying to avoid the flak that could result from an update "breaking usability". Working around that behavior could be complicated. I feel like, if this is the case, I know the vendor.


I assume it's bigcorp slowness, having to roll up all updates into patch batches, following release schedules, testing against all release trains, going through QA, etc. No accelerated way to push critical, but trivial software fixes.


Last year they pushed some "simple" fixes fast and broke quite a few older VB applications. That was quite a fun day at my office when some customers couldn't work anymore...


This is interesting. Do you have more details you could share, or a link you could point to?



MS can and does issue out-of-schedule patches every now and then. This presumably doesn't meet the bar, since it's only a local privesc.


Microsoft has patched issues fairly quickly in the past. This may be a “critical” issue, but I think they have even higher internal classifications which this doesn’t qualify for.


> 2020-12-03 Microsoft advises that due to issues identified in testing, the fix will now slip to January 2021.


> They must have some ghoulish software actually relying on the broken behavior.

Probably XKeyscore or Solarwinds Orion or something... Can't break that shit, it gets important work done!


There seems to be some confusion in the comments about the nature of this exploit.

This exploit is a kind of sandbox escape. It allows a low integrity process to run code as a medium (aka normal) integrity process. The exploit runs `splwow64.exe`, which explorer "helpfully" auto-elevates to medium integrity. The exploit then tricks `splwow64.exe` into running arbitrary code.

It does not grant admin privileges.

In practice this vulnerability isn't that scary on its own. Sandboxes don't (or shouldn't) rely solely on this form of mandatory access control for their security. However, this attack could be used as part of a chain of exploits in order to escape a sandbox.


I believe splwow64.exe is used for printing from WOW64 processes, to allow 64-bit printer drivers to be used.


> In May, Kaspersky (@oct0xor) discovered CVE-2020-0986 in Windows splwow64 was exploited itw as a 0day. Microsoft released a patch in June, but that patch didnt fix the vuln. After reporting that bad fix in Sept under a 90day deadline, it's still not fixed.

https://twitter.com/maddiestone/status/1341781305126612995



> 2020-12-03 Microsoft advises that due to issues identified in testing, the fix will now slip to January 2021.


It may seem pedantic, but since this vulnerability has been publicly known for months, and furthermore has been exploited in the wild (according to the description in the target article), it is per definition not a 0day.


The reason it is considered a 0day is because it is being exploited in the wild.

This wasn't discovered by a security researcher looking for holes. This was discovered by a virus scanning company that realized people were actively being attacked using this method.


It was a 0-day at that point. Right now it is a 90-day.


Only to Project Zero and Microsoft.

Everyone else has known about it for exactly 2 hours.


Days are counted by how long a vendor has known about a bug, not the general public.


Not exactly. Historically, days are counted from when the vendor has provided a patch fixing the bug, as in "you [the sysadmin] have had X days to apply the bugfix".

0day means no patch is available, whether the vulnerability is known privately/publicly or not.


If you (the public) learn of the exploit at the same time as the vendor, then it is still a 0-day. You can construct a definition where it is "a zero day to you, the sysadmin" but that would really make it difficult to pick a single day to measure from. For this reason the most useful definition would be to measure from the defender with the earliest knowledge, which would be the vendor.


This seems to be a phrase whose meaning is a matter of perspective. I always see people trying to nail down a meaning, but it always seems to have little effect in day-to-day discourse.

I'm a fan of letting context imply meaning. And letting certain words just naturally grow to whatever the culture wants them to mean. It's always hard to fight back against it.

There's a million examples of this on the internet where people try to be pedantic about slang or word usage. All that matters is "we know what you mean". I like to assume enough people here know the real difference between zero days and existing vulnerabilities. But in practice it matters less.


People have debated prescriptive vs descriptive linguistics for centuries, it will not be resolved here.

https://www.bartleby.com/73/2019.html


My understanding of the term is days are counted in the view of "the defender," which is more than Microsoft


Microsoft would always be the first of the defenders to know; any other defender would just tell them. It then makes sense to count from there, rather than have multiple counts for each level of people learning of the vulnerability.


I think your argument makes sense when the day counts are close to each other. There really isn't any difference between a 55-day and a 57-day, nor does it make sense to account for some sysadmin who took a vacation day.

I still think that your usage of "0-day" breaks down precisely in the case we're in currently, where the vulnerability has been exploited in the wild and Microsoft has known about it for some time, but there is not a patch available, and the general public (everyone who has to defend against the exploit) found out about it today.


It seems that "0day" has since become, for some, a synonym for "unpatched exploit".


The days can be reasonably counted since the release of a fix: a 0-day would be an exploit for which there is no defense, and an n-day would be an (old) exploit that was patched n days ago.

The most valuable time of an exploit is not the time of discovery, but the time of resolution.


This isn’t a guessing game. Zero day has a very specific meaning. Just because something isn’t patched doesn’t mean it can’t be mitigated (i.e. disable the service) so no, patching is not the most important. Disclosure is.


By defense I meant patch or mitigation. When using the sysadmin hat, I don't care when the bug was discovered or disclosed, I only care about when was it fixed, or how can it be mitigated.

Sorry my reply came so late.


It feels l33t to appropriate the terminology of a professional -

like "We need to control the optics of the situation"

"I flashed my cellphone but it failed and now it's bricked."

but the unsophisticated public gets it wrong and now here we are, every recent unpatched exploit is now 0day


> I flashed my cellphone but it failed and now it's bricked.

It doesn't sound wrong to me; both "flash" and "brick" are correct in an appropriate context. It's not "updating the system" but "flashing" if the process uses some low-level recovery mode, and it would be "bricked" if it can no longer be recovered by usual means.


Most of the time "flashing" a phone (presumably referring to androids) involves using the recovery, which is basically a stripped down version of android. In that sense it's not any lower level than booting off a USB drive to fix your computer.


According to your standard: to "flash" something, at least you need to use the bootloader itself, or possibly at a lower level? Well, calling the process of uploading a firmware image to an embedded device during early boot via U-Boot as "firmware flashing" is well established, so we can start from here... thus, uploading a new Android image in Android Recovery is not "flashing", but uploading a "recovery" image in Android bootloader is? Now, would you call firmware uploading via iOS's DFU mode "flashing" too? Or do you believe that the DFU mode is end-user accessible, thus not low-level enough? Then, would you accept that uploading the firmware to the baseband processor (which I believe uses its own EEPROM) via DFU "flashing"?

I guess the definition varies, it was what I meant by "an appropriate context".


>thus, uploading a new Android image in Android Recovery is not "flashing", but uploading a "recovery" image in Android bootloader is? Now, would you call firmware uploading via iOS's DFU mode "flashing" too? Or do you believe that the DFU mode is end-user accessible, thus not low-level enough? Then, would you accept that uploading the firmware to the baseband processor (which I believe uses its own EEPROM) via DFU "flashing"?

The difference is that the recovery is almost a full blown operating system. It can mount filesystems, has various shell utilities installed, and there's a user interface (through ADB and on-screen). This is in contrast to fastboot, which has none of those things, and only allows you to flash/erase partitions with the help of a computer.


Fair enough.


What's wrong with the second example?


In my day, bricked meant recovery was impossible without dragging out a soldering iron. This gave way to needing to at least physically disassemble something to get to a JTAG or similar port. If you can recover the device with user-accessible buttons and standard cables... it’s not bricked in my book.


To me it's not even a brick if a soldering iron is still useful.

And for a number of years I only thought a zero-day defect was something that had existed since that version of DOS or Windows was issued, regardless of when or whether it was discovered, patched, exploited or not.

Eventually I got the idea that people just don't want to count back that far.


It was an 0-day at one point in time though. Unless you're the one using it, an exploit is only ever an 0-day in the past.

An alternative title could include "actively used" or similar to maybe be more clear.


So then every exploit is a zero day?


Every one that is initially found and used by an attacker, up until it is detected.

The exploit was a 0-day at one point in time. Furthermore, I'd argue that the perspective of the one talking also matters. If Microsoft etc know about it, but haven't patched it or made anything public, it's definitely a 0-day if used against me, as I haven't had any opportunity to defend against it.


They all begin as a zero day.


Not true, some exploits are written by examining the holes fixed by a vendor security patch, then writing an exploit to target the systems that haven't been patched yet. Those are not zero-day exploits.


Usually we say vulnerabilities are 0-days instead of 0-day exploits.


Now using the concept of language, what distinction does that give you? What message does that convey to anyone better?

0-day versus "publicly disclosed unpatched vulnerability" doesn't help anyway


"0-day ... still not fixed" makes it sound like someone is expecting Microsoft to have created a patch for a new exploit with same day turnaround. And therefore what's the big deal that they haven't?

If you want to use the "day" framing, the appropriate headline is "90-day exploit still not fixed". The entire point is that it's an old exploit that is still unpatched, and not some new discovery.


The patch was released (less than) 0 days ago; that's the definition, no?


Not if the definition is ‘a known security issue with no patch’


Which it isn’t...


Which I thought was wrong, but even Wikipedia agrees with you:

"A zero-day (also known as 0-day) vulnerability is a computer-software vulnerability that is unknown to those who should be interested in mitigating the vulnerability (including the vendor of the target software)."

Personally I have always used it as "there is no mitigation / patch available". Thanks for pointing me in the proper direction.

Edit: I am old and stubborn, I still use 'hacker' as a compliment.


> 2020-12-03 Microsoft advises that due to issues identified in testing, the fix will now slip to January 2021.

> 2020-12-08 Meeting between MSRC and Project Zero leadership to determine details and discuss next steps. The 14-day grace period is unavailable as Microsoft do not plan to patch this issue before Jan 6 (next patch Tuesday is Jan 12).

> 2020-12-23 90 day deadline exceeded - derestricting issue.

Ouch. With Christmas in the middle of the grace period, I could see how this can be considered too strict on P0's part. Then again, the initial bad fix surely harmed whatever trust there was between the parties.


It's being actively exploited, so frankly a 14-day grace period is the best MS can hope for


Any grace period for actively exploited bugs is irresponsible. Stuff that the bad guys use needs to be public asap.


> The only difference between CVE-2020-0986 is that for CVE-2020-0986 the attacker sent a pointer and now the attacker sends an offset.

CVE-2020-0986 had been discovered in the wild in May. Microsoft claimed to have fixed it, so this was logged as a separate CVE, even though it's essentially the same bug (the fix can be trivially circumvented) and P0 has given it a new 90-day period, which has now run out.

I wouldn't call it too strict, they had much more than 90 days to fix it properly.
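To illustrate the class of mistake (a hypothetical Python sketch, not the actual splwow64 code): validating one representation of an attacker-controlled address while leaving an equivalent representation unchecked leaves the same write primitive intact.

```python
# Hypothetical sketch of a pointer-validation patch that an offset bypasses.
# Addresses and sizes are made up; this is not the real splwow64 logic.

BUFFER_BASE = 0x10000
BUFFER_SIZE = 0x1000  # valid addresses: [0x10000, 0x11000)

def naive_patch_valid(ptr: int) -> bool:
    """Roughly the shape of the original fix: reject attacker-supplied
    raw pointers that fall outside the shared buffer."""
    return BUFFER_BASE <= ptr < BUFFER_BASE + BUFFER_SIZE

def resolve_offset(offset: int) -> int:
    """The bypass: the attacker now sends an offset, which the patched
    code adds to the base without re-checking the resulting address."""
    return BUFFER_BASE + offset  # no bounds check -> same arbitrary write

def proper_fix(offset: int) -> int:
    """Validate the final computed address, not just one way of
    expressing it."""
    addr = BUFFER_BASE + offset
    if not (BUFFER_BASE <= addr < BUFFER_BASE + BUFFER_SIZE):
        raise ValueError("out-of-bounds address")
    return addr
```

The point is that the security check has to happen on the value actually used for the memory access, after all arithmetic on attacker-controlled input.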


Not being super familiar with Windows, is an escalation from "low privilege" to "medium privilege" actually concerning in practice?

(e.g., this be used for something like breaking out of a Chrome sandbox?)


In practice, no. Meaning, typical Windows environments (especially corporate AD) provide so many privesc vectors that no pentester has ever been stopped by this security boundary. Same goes for AMSI bypasses, etc.


https://chromium.googlesource.com/chromium/src/+/master/docs...

>Integrity levels are available on Windows Vista and later versions. They don't define a security boundary in the strict sense, but they do provide a form of mandatory access control (MAC) and act as the basis of Microsoft's Internet Explorer sandbox.

And yes, chrome uses it as a sandbox.


To be clear, Chrome uses it as part of a "defense-in-depth" strategy, but its sandbox does not rely on it. From your link:

> So, the integrity level is a bit redundant with the other measures, but it can be seen as an additional degree of defense-in-depth, and its use has no visible impact on performance or resource usage.


The print spooler runs under the local system account so you effectively get admin rights over the local machine. If it's a terminal server then you control the server.

Not sure about Chrome though.


This vulnerability is about `splwow64.exe`. It's started by the malware but explorer has it on a whitelist that automatically elevates it to medium integrity. It's not running as admin or system.


My mistake, you are correct.


just wondering... is there any defense normies like me can take? e.g. turn some Windows feature off?


It isn't exploitable remotely, so just don't run shady software.


[flagged]


That’s not a very helpful comment. Not everyone has a choice in what OS they use (especially if it’s at work)


At work, when Windows is corporate policy, you do not need to care about exploits. It is literally other people's problem.


It's a problem for someone, and knowing about any mitigation is helpful.


OK, yes, if you are the IT dept, you are on the hook. At least if you are the ones who picked windows. But maybe you didn't and strategically protested the directive to use windows that came from up above. Then again, you don't really have to care, not your problem...


It is your problem because IT’s job is to prevent this stuff from happening. It doesn’t matter if the order came down from above, you need to do what you can to mitigate damage.


There is a world of difference between "job" (try to do it properly) and "responsibility" (you are on the hook if things go wrong). If the order came from above and you pointed out the problems, it might still be your job. But not your responsibility.


You don't personally care, so the rest of us should not care either? You think it's someone else's problem, so hide the solution from everyone?


You buy support contracts and software from Microsoft so you don't have to care. If Microsoft fails like in this case, you just shouldn't give them money. In all cases, no need to ask anyone but Microsoft for a workaround or other info.


Why even bother reading anything on this site or commenting here when you can always just go to the source or manufacturer? Obviously, you have all of the answers anyway. It's clear no one here has anything to offer you. The rest of us, however, find value in understanding the experiences of others.


That's not a very helpful comment and highly subjective. Depending on their requirements and needs a different OS might not even be feasible.


hm... do you mean linux-based? can't... Korean banks have activeX + other crap requirements. (they even detect VMs in linux)

also, linux can't run apps like photoshop / adobe cc apps / etc

as for mac... I'm waiting for a M2 macbook pro 16 inch with RTX 3090 graphics for about $1500...


"also, linux can't run apps like photoshop / adobe cc apps / etc" - seem to run pretty well under Wine most of the time...


I haven't been able to get PS running in Wine since the 2017 CC release (and that required some hackery).

Are you aware of a way to get recent releases working aside from QEMU or KVM?


Run CC 2015?


Pihole with the right block list can prevent known malicious software from hitting its command and control endpoints.

They can always use DoH, but you can block DoH domains via the Pihole as well.


It's pretty easy to hardcode the IPs of DoH resolvers and bypass Pihole completely.


There’s considerable precedent for seeding IP lists or using stealthy tactics (e.g. imagine trying to block something which searches Google or Twitter, or hits a random ad network).


Fair enough. On the other hand it can also prevent users from stumbling upon malware distribution sites by both blocking them directly and secondly blocking advertisements that often link to malware.

All of this, of course, is part of defense in depth; multiple layers of incomplete protection are better than nothing at all.


Oh definitely, I'm not saying that there's _no_ benefit — the key point is the distinction between something which you control to something you don't. DNS filtering is good for clients you control but it's important to understand that you can't force malware to use it to avoid accidentally thinking that you're protected against other threats (which I've heard various times from people who should know better but weren't thinking about it carefully in-depth at the time).


isn't PiHole some kind of external firewall? That works against 90% of the average-joe known botnets on a desktop PC, but it's not helpful for laptops or unknown control endpoints (or endpoints that are really good at hiding)


No, it’s a DNS server with blacklisting features. It can’t block traffic, it can only prevent some software from looking up addresses.


You can use PiHole or one of the many equivalents on a laptop or other location shifting device in a few ways:

1. Run it locally and have it configured to use a public name server as its source (if you run Windows/other there are no doubt native options that'll work this way too). Even if the network you connect to redirects requests to public DNS resolvers you'll still be going through your local filter. Though you'll need to set your machine to ignore DNS config via DHCP, and you'll have to point it at the local resolvers if the network simply blocks public DNS servers.

2. Run it in a VM or container, this would mean you can run PiHole specifically even if you are running Windows, and configure as above. Memory requirements are pretty low so unless you are using very low spec device it should fit.

3. If you have a hosted server (you can get a VPS big enough for PiHole for a few $/year) or a publicly reachable address at home, you can run a VPN and access it that way (assuming the network you are on does not block your VPN of choice, of course). You don't have to run a VPN, but I'd not recommend running a publicly addressable DNS server. This will even work on phones, depending on the OS there and the chosen VPN.

Of course these are not viable options for a lesser techie user.


PiHole is a network wide ad blocker that works at the DNS level. Basically you route all of your network's DNS requests through PiHole and it blocks any domains that are known ad/malware domains.
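The core mechanism fits in a few lines. Here's a toy sketch in Python (the blocklist entries are made up, and Pi-hole is really a full DNS resolver, not this):

```python
# Toy sketch of DNS sinkholing: blocked domains resolve to an unroutable
# address, everything else is forwarded to a real upstream resolver.

BLOCKLIST = {"ads.example.com", "tracker.example.net"}  # hypothetical entries

def resolve(domain: str, upstream=lambda d: "93.184.216.34") -> str:
    # Normalize: DNS names are case-insensitive and may carry a trailing dot.
    if domain.lower().rstrip(".") in BLOCKLIST:
        return "0.0.0.0"        # sinkhole: the client gets nowhere to connect
    return upstream(domain)     # forward everything else upstream
```

Because every device on the network is pointed at the filtering resolver via DHCP, no per-device configuration is needed; that is also its weakness, since anything that resolves names by other means walks right past it.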


Why would you not just modify your hosts file on your machine? Do you really need a raspberry pi for this?


Sometimes you don't have access to the hosts file, like on an unrooted phone or a smart TV.


Aside from using IPs directly, modern malware often uses an algorithm to generate domain names for C&C communication. Good luck trying to use a domain whitelist on the modern internet, because web developers seem to actively fight against such a concept by using every domain they possibly can.
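A domain generation algorithm can be sketched in a few lines (a purely illustrative toy, not modeled on any real malware family): the malware and its operator derive the same date-seeded names, so a static blocklist compiled today is stale tomorrow.

```python
# Toy DGA sketch: deterministic, date-seeded domain names.
# Illustrative only; not any real malware family's algorithm.
import hashlib
from datetime import date

def dga_domains(day: date, count: int = 3, tld: str = ".com") -> list:
    domains = []
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode()
        name = hashlib.sha256(seed).hexdigest()[:12]  # 12 hex chars as label
        domains.append(name + tld)
    return domains

# The operator registers only the domains for the days they need, shortly
# before use; the malware tries the whole day's list until one answers.
```

This is why defenders lean on behavioral signals (e.g. bursts of NXDOMAIN responses for random-looking names) rather than static domain lists.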


Now I can grant myself admin privileges on my work computer and remove all of the corporate shitware/spyware on it :)


This allows you to go from low integrity to medium. So it's a sandbox escape. It won't give you admin if you don't already have it.


Doesn't surprise me. I've been sitting on a WSL1 root escalation script that I found by accident. I messaged one of the WSL leads and he didn't seem to care about it.

It's been months. Microsoft and security don't exactly go together in my head.


Issues like these, the massive hack of US government, etc.

Taken together these things feel like the death knell of Wintel.


haven't got the approval to fix this from whichever agencies are benefiting from it.


with M$ every day is a 0day

[same applies to Goo only their troll army, yet, is larger. as long as it lasts ..]


What is the point of disclosing it if it is not fixed? I understand it is to apply pressure and the like, but doesn't it cause more harm than good?

Windows is very popular.


This is a foundational policy question in security research, and Project Zero gives a lot of detail on its approach, e.g.

https://googleprojectzero.blogspot.com/p/vulnerability-discl... https://googleprojectzero.blogspot.com/2020/01/policy-and-di...


Thanks for sharing the links. I knew it was a policy, but never really looked more into it.


Historically, vendors often refused to allocate time to patch things for anywhere from months to years.

Leaving vulnerabilities in products for an extended period of time is a problem, and adding a deadline helps to ensure that important security issues actually do get triaged and addressed.

As a recent project zero blog post about their policy calls out (https://googleprojectzero.blogspot.com/2020/01/policy-and-di...) "We've seen some big improvements to how quickly vendors patch serious vulnerabilities, and now 97.7% of our vulnerability reports are fixed within our 90 day disclosure policy."

It sounds like it's working as intended. The only way you can make it actually work is to make sure it has some teeth though, hence you have to actually disclose when you say you will.

> doesn't it cause more harm than good?

Microsoft is harming its users by not fixing a security vulnerability. In this case, it's even more clear since there are "in the wild" exploits. Project Zero's just helping to raise awareness of the harm Microsoft's causing.


It is in the public's best interest to demand timely fix because you never really know whether bad actors know about it. A demand has no teeth, therefore you have to make a threat (fix in 90 days, or we disclose publicly). A threat is only good if you have a track record of delivering on it without exceptions. Therefore, it isn't an option to not disclose it at 90 days.


A while ago, responsibly disclosed bugs took an extraordinarily long time to be patched. Disclosure deadlines ensure things are patched in a timely manner. They only work, though, when the reporter actually follows through if the deadline is missed (and has the standing and legal protection to execute, like the Project Zero folks).


It's not even remotely exploitable, which, given that the vast majority of computers are owned and used by a single user, makes it a very low to zero risk.

The stakes are very different for disclosing a remotely exploitable vuln.


If you've ever tried reporting vulnerabilities, you'll see that some companies won't ever fix the problem until it is widespread.


This is a feature, not a bug.


That's why I don't use Windows for work. It's not a system for professionals. As usual with Microsoft - smoke and mirrors and money is what matters the most.


I think that it’s jackassery by Google project 0 folks to publish proof of concept code when Microsoft is clearly trying to fix the issue. It puts millions of people at risk. Microsoft issued a patch; it didn’t fully fix the problem, ergo let’s be asses.


Check out this discussion on that issue here: https://news.ycombinator.com/item?id=25520299


You need to understand why this is done before calling it "jackassery".


I understand exactly why they do it. And it’s jackassery. If Microsoft had not done anything then I’d fully support responsible disclosure. But Microsoft did issue a patch which did fix the originally reported issue. It’s not clear that the second exploit path was called out in the initial project zero report. IMHO the fact that the vendor is responsive and working on the issue means that publishing exploit code puts millions of users at additional risk with no tangible security benefit. And yes, I am a career IT security professional. Project Zero is a good idea in general; in this case disclosure was irresponsible IMO.


The report timeline was listed in the link, with this new variant being reported on September 23rd.

Having disclosure deadlines is the only way to get away from companies taking literal years to provide patches, and the deadlines only matter if they're actually enforced.


They were given 3 months from when the patch was reported to be broken. Not the initial report.

So..


The patch appears to have been fairly trivial to work around.


#Google once again is being irresponsible by disclosing this vulnerability. Look at the timeline: there was a meeting between Google and Microsoft where Microsoft requested more time for the fix to roll out. And Google decided to do the irresponsible thing and just disclose the vuln, so now more and more attackers can use it instead of doing what's good for customers/users.

Google... Don't be evil??? Lol. So not true now...


Delayed disclosure is irresponsible. Doubly so if the vulnerability is the same as an old one. Triply so if there is active exploitation going on. So yes, Google is evil, for not doing immediate full public disclosure 90 days ago.


I disagree.


Google Project Zero is fairly firm about their timelines.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: