It's really annoying we don't have a better solution for this. Even outside of open source, I don't want to spend over $600 up front, before I sell a single copy of an app, just to stop MS from blocking it. And that's not even mentioning companies like Sectigo being terrible at their job. I've spent over a week going in circles with their support about verification: "your license shows address A", "no, the back shows the current address B, it's in the file I sent", "please send us a valid ID with address B", (repeat).
But unfortunately that's just a rant. I don't know if there even is a better solution. The money barrier (rather than verification) will stop some opportunistic malware, but big players won't care.
Even for our company, we would fork over the $600 but it looks like all of the EV cert options require a hardware signing key. Putting a human in the loop for our otherwise fully automated release process is a non-starter.
Worse still, the SafeNet software that my cert vendor recommends (for interacting with the hardware key) doesn't even work in Remote Desktop sessions!
It somehow detects that you're in an RDP session and, if so, reports that no hardware tokens are attached. No message or warning whatsoever. My only Windows PC is headless, and I lost several hours trying to debug this.
The entire EV cert process is such an outrage. My cert vendor advertised that the validation process would take 2-3 business days if all docs were in order, DUNS info correct, etc. I spent a lot of time ahead of the order ensuring the docs were indeed in order, and the process still inexplicably took 9 business days.
It's not about virtualisation. RDP sessions are actually marked as remote login sessions. The login source can easily be checked by each app (or just run `query session` at a prompt).
If TeamViewer acts on an already logged-in local session, it should work well.
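As a rough sketch of the check described above (not the SafeNet tooling's actual logic): on Windows, interactive RDP logins get a `SESSIONNAME` environment variable like `RDP-Tcp#0`, while the local console session is `Console`; native code can ask the same question via `GetSystemMetrics(SM_REMOTESESSION)`. A minimal, hedged Python version:

```python
import os

def in_rdp_session() -> bool:
    """Best-effort RDP check (sketch; assumes Windows sets SESSIONNAME).

    Interactive Remote Desktop logins get a SESSIONNAME such as
    "RDP-Tcp#0"; the local console is "Console". On non-Windows
    systems the variable is normally absent, so we report False.
    """
    return os.environ.get("SESSIONNAME", "").upper().startswith("RDP")

# Example: simulate the two session types.
os.environ["SESSIONNAME"] = "RDP-Tcp#7"
print(in_rdp_session())  # True
os.environ["SESSIONNAME"] = "Console"
print(in_rdp_session())  # False
```

The environment-variable check is heuristic; `GetSystemMetrics(SM_REMOTESESSION)` is the more robust answer for native apps.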
Back in 2014 I was working at AltspaceVR (a social virtual reality startup) and we had Mac and Windows versions of the product. I set up a Mac Mini at the office to do the Mac builds, and it also ran a Windows VM under Parallels to do the Windows code signing. (The actual Windows builds ran in the cloud and we sent them down to the Windows VM for signing and then it sent them back up to the cloud.)
We had a Digicert code signing certificate that used a hardware key connected to the Windows VM. Unfortunately it required a password to be manually entered each time the code was signed.
To automate this, I wrote a little AutoHotkey script that watched for the password dialog and entered the password.
There wasn't any RDP issue because we didn't use RDP, just a Windows VM that didn't need any user intervention. (It could have been a separate physical machine, but since we had the Mini anyway and it had the capacity, it was convenient to have it do both the Mac builds and the Windows code signing.)
I sometimes think there are few problems that AutoHotkey cannot solve.
Also, Microsoft is working on a code signing service called Azure Code Signing, where Microsoft issues and manages the certificates and keys; you simply upload binaries or app packages to Azure, which does the signing.
That sounds like abuse of a monopoly position to me. They keep the horrendous status quo as bad as possible so their new product looks good by comparison.
Of course, there's a kinda reasonable reason for the hardware token requirement: the widely publicised 2010 worm Stuxnet shipped a signed driver, using a stolen copy of Realtek's driver-signing certificate. [1]
And stolen certificates make the whole code-signing house of cards fall apart - you can't trust something signed by Realtek if it was not, in fact, signed by Realtek!
Of course, hardware tokens aren't a panacea: Some malware authors simply set up a shell company and get a certificate issued to that company.
One of my clients has strict requirements for an automated build process, and we managed to use an EV code signing cert on a YubiKey w/ PIN - so it’s definitely possible with a little leg work.
After having gone through it, I agree with other posts that the main annoyance is the verification process and weeks of delays/back-and-forth. That, and the inconvenience of now having a single point of failure in the build process (unless multiple certs are purchased).
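For the curious, a hedged sketch of what the automated signing step can look like. The flags shown (`/fd`, `/sha1`, `/tr`, `/td`) are standard `signtool` options, but the timestamp URL and thumbprint are placeholder values, and the token PIN is typically cached by the vendor's middleware rather than passed on the command line:

```python
import subprocess

def signtool_command(thumbprint: str, target: str) -> list[str]:
    """Build a signtool invocation that signs `target` with the
    token-backed certificate identified by its SHA-1 thumbprint.
    (Sketch only; the timestamp server is an example value.)
    """
    return [
        "signtool", "sign",
        "/fd", "sha256",                         # file digest algorithm
        "/sha1", thumbprint,                     # select cert from the store
        "/tr", "http://timestamp.digicert.com",  # RFC 3161 timestamp server
        "/td", "sha256",                         # timestamp digest algorithm
        target,
    ]

def sign(thumbprint: str, target: str) -> None:
    # Raises CalledProcessError if signtool reports failure.
    subprocess.run(signtool_command(thumbprint, target), check=True)
```

Timestamping matters here: it lets signatures remain valid after the certificate itself expires.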
Correct me if I'm wrong, but when a fully preconfigured YubiKey is shipped to you as part of the EV cert fulfillment, there is no way to do this after the fact.
You need the key, but there are ways to get a .pfx out of it, which I unfortunately don't remember; it's probably documented by whoever issued the key. Otherwise, signtool can be used with the key directly, though it's not always trivial to get working.
It's not easy to spot malware, even if you have the source. For example, Zoom can capture your screen, start applications, capture the mic and camera, and allow remote control of your desktop. Why wouldn't it be flagged as malware even if you could automatically inspect the source?
Automated malware detection typically looks for behavior during installation rather than just the payload. (You can use the payload as a hint, though.) If an installer downloads a PNG and injects the last half of it into another process, and that drops an unsigned EXE into 'all users\startup' that can capture your screen, etc. you can probably block that without pissing too many people off. If you block SCREENCAP.EXE it's a different story.
That's old news. The race happens every day. AV companies detonate uploaded samples in a simulated environment; malware authors fought back with `sleep(10000)` to outlast the automation, since that's longer than the analysis is worth running, deferring the bad parts until much later. Then the test environments started faking time speed-ups.
I think there was a good episode about that in the Risky Business podcast.
We had to manage flagging problems at my company, and even though we now sign our installers with an EV certificate, antivirus vendors have their own reputation databases and still flag our work until a certain number of users have installed it.
And to be fair that is very much _not_ the entire point of the article.
"Essentially you have the option of two different kinds of certificates and three different price levels, depending on your urgency and user sensitivity."
That's too bad. I see why you're angry. Good luck!
Maybe you need to change something about your application.