As an engineer, I find it grating that nobody considers how much one should corrupt a signal that might be scanned under very different conditions (different light levels, resolutions, and so on).
It’s like scratching a design onto the bottom of an audio CD, playing it, and if it works on your CD player, shipping it. “Works for me”
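For what it's worth, the trade-off being glossed over here is actually quantified in the QR spec: each error-correction level buys damage tolerance at the cost of payload. A quick sketch using the codeword split for a version-1 symbol (numbers from ISO/IEC 18004; the recovery percentages are the commonly cited approximations):

```python
# Version-1 QR symbol: 26 codewords total, split between data and
# Reed-Solomon error correction depending on the EC level.
# level -> (data codewords, EC codewords, approx. recoverable damage)
EC_LEVELS = {
    "L": (19, 7, "~7%"),
    "M": (16, 10, "~15%"),
    "Q": (13, 13, "~25%"),
    "H": (9, 17, "~30%"),
}

for level, (data, ec, recovery) in EC_LEVELS.items():
    total = data + ec
    print(f"Level {level}: {data} data + {ec} EC codewords "
          f"({ec / total:.0%} overhead, recovers {recovery} damage)")
```

So an "artistic" code that eats into the modules is effectively spending part of that EC budget on looks, leaving less margin for bad lighting or a cheap camera.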
I feel like you need both kinds of people: the ones who scratch a unique song onto a CD that only plays on one random CD player 2% of the time, and the ones who then optimize that song to play on 100% of devices while you're upside down and underwater.
Make the cool QR code first, then get it to work everywhere once you've actually made something cool.
Look at it as using unused parts of the signal spectrum to fit in another signal. Sometimes trading away error resilience for more data is worthwhile; see for instance 256-QAM compared to 4-QAM (although not quite the same, I admit).
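To put rough numbers on that QAM comparison: a square M-QAM constellation carries log2(M) bits per symbol, and (by the standard textbook result) its minimum inter-point distance at unit average symbol energy is sqrt(6/(M-1)), so the noise margin shrinks as the density grows. A small sketch:

```python
import math

def qam_tradeoff(m: int, avg_energy: float = 1.0):
    """Bits per symbol and minimum inter-point distance for square M-QAM.

    Uses d_min = sqrt(6 * Es / (M - 1)) for a square constellation
    normalized to average symbol energy Es.
    """
    bits = math.log2(m)
    d_min = math.sqrt(6 * avg_energy / (m - 1))
    return bits, d_min

for m in (4, 256):
    bits, d = qam_tradeoff(m)
    print(f"{m:>3}-QAM: {bits:.0f} bits/symbol, d_min = {d:.3f}")
```

256-QAM packs 4x the bits per symbol of 4-QAM, but its constellation points sit roughly 9x closer together, which is exactly the "more data for less resilience" trade.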
I get what you mean, it's a misuse of the underlying technology and a crude hack in some ways, but if it's stupid and it works, it's not stupid.
This can barely be recognized by a layperson as a QR code at all.
This may be great for steganography, like those US POWs in Vietnam who blinked Morse code for "torture" while being filmed: their captors didn't realize it and released the footage, and the morbid message was extracted later.
Does it need to look cool more than it needs to work instantly? (like an ad)
Also, I imagine most people use maybe three QR code readers: Snapchat and the built-in camera apps on iOS and Android. Seems pretty trivial to test for 80% of the population.