In case someone missed the story, Paul Graham is indirectly talking about the feedback received by Mighty App.
And in this particular case, I don't think this is a valid defense.
First, he clearly has too much skin in the game to be credibly neutral about it.
Second, he avoids addressing the main critique about this "new tech".
People are not claiming that it is a bad idea because it is infeasible or not valuable, but because it is dangerous and also because it sounds technically ridiculous (a thin client inside a thin client).
> First, he clearly has too much skin in the game to be credibly neutral about it.
I think this is a general and important point (and sadly not at all discussed in his post). When an expert publicly says something that seems wrong, my default explanation is that they have a vested interest that consciously or unconsciously forces them to consider the implications of what they say and alter the message accordingly. Recent case in point: public health authorities telling the public that masks are useless and even harmful during the pandemic (presumably to avoid shortages) and then reversing their stance when masks became abundant.
This is especially true for public statements. If I were a friend of pg and we went to a pub and he would not stop talking about how awesome one of the startups he invested in is, that would be a strong signal for me. But if he shills for one of his investments on Twitter and on his blog just like some influencer-investor would, I don't find that especially strong evidence that said startup is revolutionary, even though he is an undisputed expert on startups.
> Recent case in point: public health authorities telling the public that masks are useless and even harmful during the pandemic (presumably to avoid shortages) and then reversing their stance when masks became abundant.
I agree with your general point, but my interpretation of those events has an additional axis:
1. Masks are in short supply, may not protect you, and the public doesn't know how to use them; people will probably do more harm than good by reusing, handling, and touching their masks.
Then as time passed and we learned more
2. This is getting bad; even if masks don't save the person wearing them, if it helps others, we are at a point where we need all the help we can get. Please wear masks to help slow the overall spread.
While masks becoming more abundant was definitely part of the complex factoring behind the recommendation, I feel the initial message was "Crappy masks won't save you" and the later message was "Crappy masks won't save you, but may save others from you."
98% of people I talk to don't understand that a surgical or cloth mask will do an extremely poor job of protecting them, but may protect others. (It also adds an axis of complexity for those who don't want to wear masks because they feel they have the right not to protect themselves, because that's not what mainstream masks are for; it's not like wearing a helmet to protect your own head; it's about protecting the heads of those around you.)
> 98% of people I talk to don't understand that surgical mask / cloth mask will do extremely poor job of protecting them
This was the line for a while, but since we now know that it's an airborne infection that basically accumulates when you're in close proximity to someone infected until it reaches a critical point where it can grow faster than your body can fight it off, the mask is the only thing that's slowing it down.
Masks (and good ventilation) are about the only thing that protects you - social distancing means nothing because coronavirus isn't confined to the larger droplets we thought it was. Mask-wearing seems to result in less-serious infections even when you catch it because your initial infection was likely by less virus.
edit: I think there was an official reluctance to admit that it was completely airborne because the constant attempts to reopen businesses (like restaurants) would have been completely thwarted if social distancing (and constant surface sterilization) were meaningless.
And then, when they finally got around to actually applying some science to the question, the message became, "Surgical and (decent) cloth masks will protect others from yourself, and may also protect you."
I can't track down papers atm, but, on a more anecdotal level, there have been quite a few case studies of superspreader events where the people who were wearing masks were much less likely to contract the illness than people who weren't. Given the specific details at play, it's hard to explain how that could happen if cloth or surgical masks don't also protect the wearer.
The big problem here was that, early on, nobody knew exactly how the virus spread. So, in the interest of caution, they picked the worst case scenario, aerosol transmission, and speculated based on that assumption. And a cloth or surgical mask probably won't protect the wearer very well in that case. But it turns out that droplet transmission seems to be the better model.
> The big problem here was that, early on, nobody knew exactly how the virus spread. So, in the interest of caution, they picked the worst case scenario, aerosol transmission, and speculated based on that assumption. And a cloth or surgical mask probably won't protect the wearer very well in that case. But it turns out that droplet transmission seems to be the better model.
So close but so far.
At a minimum please give https://www.thelancet.com/journals/lancet/article/PIIS0140-6... a read. The real story is that SARS-2 does spread through aerosols, whereas droplet transmission is pure unproven dogma. The paper I linked goes into some reasons why that is the case (for example, the fact that transmission is more likely close to someone doesn't actually provide strong evidence for droplet transmission).
EDIT: I should add that even if it were primarily droplet transmission - which I very much doubt - masking would likely still fail for the stated goal (source control) in a community setting due to improper usage. And improper usage doesn't just mean "the mask is below your nose", it means "you're not changing out the mask the moment it gets damp", "you're touching your mask with unwashed hands", "you're touching your hands with unwashed mask", "you're standing closer to your conversational partner to compensate for the fact that masks muffle hearing" (<--this last one isn't "improper usage" so much as an inevitable result but I digress). And this is all without discussing any of the various negatives of mask-wearing that it's become trendy to pretend literally don't exist.
But to your point, you already correctly hinted at the fact that if aerosol transmission is the dominant transmission mode, then masks don't work even in theory, let alone in practice.
I'm with you. To me, it was pretty clear why they were making recommendations in the way that they did. There was no secret about why they wanted to limit the supply of masks going to the public at the start of the pandemic. They didn't know for sure how it spread, but they did know that the most important places to have masks were the hospitals. You need to be extraordinarily cautious there because every hospital worker that gets sick reduces the number of people available to deal with patients (and adds another patient). There was also concern about face touching if people not used to masks started wearing them, but I only recall seeing that being a concern when it came to sending children to school with masks.
Once the mask supply was able to meet demand and after they were reasonably certain it would protect against the spread of the virus, they adjusted their recommendations accordingly.
That's sarcasm, but let me try to address the notion behind it, to the best of my limited personal understanding: "prevent spread" has multiple factors to it.
My impression, in my locality, is that at the beginning of the pandemic people focused on the notion of using masks to protect themselves (and many, though not all, governments indicated that's not effective/recommended).
Then "as time passed and we learned more" focus moved to using masks to protect others (though many individuals, in my circle, still aren't clear on that).
My impression is that we have evolving, and still not 100% certain, evidence and understanding of how it spreads and what the most effective measures are. It's made more complex because
a) there's no silver bullet; most measures only reduce your risk by some percentage. This makes discussion between experts and the public more difficult, as the public tends to think in binary terms.
b) While yes there are many public health measures we've known for 100 years (wash hands, have clean water, cook/boil/heat things to sterilize them, sneeze in elbow/Kleenex, wear mask, remove waste, etc etc), not all are equally effective against all vectors. What seems "Common sense" / "Logical" to a layperson like myself, may be more nuanced to an expert with experience.
I mean, for what it's worth, I'm 100% certain my dad, a year in, is still worse off for using a mask because of how he uses it. Many people around me, especially older ones, reuse their masks for days and weeks, touch them constantly, wear them under their nose for prolonged periods, don't squeeze/tighten them sufficiently, etc. Even if all that touching doesn't hurt, their belief that they're protected, coupled with incorrect usage, coupled with a likely increase in risky behaviour, is a net negative.
People can scream liberty and freedoms and personal responsibility, but I feel public health officials have to look at cold hard facts, of both disease but also people's actual behaviour (as opposed to some ideal non-existent form) and how it actually affects spread rather than how it logically intuitively should.
My impression is that the importance of masks is well studied and masks are regarded as one of the most important hygienic achievements, right behind washing hands. These are cold hard facts. Everything else is political wiggle-waggling.
You may argue whether it is okay for a technocratic establishment to lie to people and manipulate them; but to pretend that they didn’t feed the public with half-truths is just ridiculous.
I think we have wildly different understanding; and mine is very much evolving - as I feel is everybody else's.
Masks right behind washing hands - honestly, until Covid, I clearly lived in ignorance and had NO idea that a) surgical masks are tested and designed to protect others from the wearer's exhalations and b) N95 masks are tested and designed to protect the wearer via inhalation. Just that basic, yet absolutely crucial concept was unknown to me. I had very limited exposure to masks across two continents and half a dozen countries, and they certainly weren't part of my own education about the most important hygienic achievements or practices across four different educational/political/health systems. That doesn't necessarily mean masks aren't crucial, but perhaps it doesn't make them obviously so either, in the context of daily public health.
And then we have the mucky complex detail of aerosol vs droplet vs surface, sizing and staying power, etc.
You present, and clearly strongly believe, that (all?) masks are a binary, always-good, always-clearly-protective thing.
My understanding/interpretation is that it's more complex than that; I am OK with scientists, when faced with a net new illness, inherently evolving their opinions and recommendations. I guess this is where you see a "technocratic establishment lying and manipulating". I look at wildly different countries with wildly different values, frameworks, goals, assumptions, methods, and yes, "political wiggle-waggling", and most of those culturally and politically different scientists following a broadly similar path of learning and understanding.
It is likely our frameworks and life paradigms are too different to come to an agreement - at some point we all, you and I included, fit inbound facts and perceptions into our root approach and beliefs.
This domain is frustrating to talk about as a technical person because success isn't based on technical soundness, but rather popularity.
So any argument I could realistically make against $sillyAppThatHasVCBacking will not matter if enough Paul Grahams back it and $sillyApp makes itself a moat.
WeWork would be the most prominent example of this; even after their 2019/2020 crash, they'll probably come out of the pandemic okay compared to its competitors simply because it just got big enough.
I lost a decent amount of respect for pg due to how he keeps portraying Mighty like it's gonna change how we use the Internet and computers in general. Like, really?
I don't mind Mighty as a product, I don't mind their team, their pricing or their slick marketing website. But please, call it what it is: A nice and slick Remote Browsing product, one of multiple ones. Cloudflare recently launched an RBI product with much more humble and honest marketing about where it will be useful.
I was critical of Mighty on twitter, but not because I think it can't succeed. I don't want it to succeed. The whole concept is solving a problem that shouldn't need to be solved, ie running bloated web apps on commodity hardware. Mighty is essentially subsidizing bad software engineering practices and passing that cost onto the end consumer in the form of a monthly subscription service. I don't want to pay $30 a month so that Adobe can spend less on R&D optimizing their web apps.
Adobe could spend a billion dollars on R&D to optimize their web apps and still not get it. It's not an engineering problem, it's a corporate structure problem.
The only way to solve web app performance is to put 5-7 engineers in a room and say "we want it fast. As fast as possible. Get to work."
But no, they have the front end team, who has to run everything by the compliance committee, and the architecture committee, and the API committee, and the back-end team, that has to run everything by all the committees and the front-end team, and they also have to wait on the infrastructure team before they can test any of their stuff, and before you know it 12 weeks have gone by before an engineer has written an actual line of code.
And none of this corporate structure can be fixed because the CEO doesn't understand it, and all the management he hires to fix it are financially incentivized to continue holding their little fiefdoms.
I believe it will be a successful business by normal measures, but it won't live up to the hype and vision of its founders and investors. I think there will be a market for such a tool, especially with enterprises where people are forced to use a particularly slow web app or need other isolation features.
But for other people? Can you imagine Adobe saying "Here is our product, now please purchase this third-party cloud service to be able to use it." They either improve their software or launch their own server-driven app to capture those $30. That's my take.
This idea of a dumb client is not new; it has been around basically forever, but we're seeing it again in multiple different incarnations because it could be huge.
And I mean that in a bad way.
Privacy and users rights are already pretty bad, but we still own our computers.
I can see how it would start in the corporate world.
If you have to manage 100s of devices, you prefer thin clients. As a user, you want control over your device and rich functionality.
Dumb clients usually give you dumb services.
I thought we had moved past thin clients and moved on to centrally managed software, either in the form of JavaScript web apps or as apps from an app store.
It's like they don't understand that (seemingly) forgotten architectural principle of the internet which says that the "intelligence" goes in the ends: i.e. the leaves are smart, the nodes are as dumb as pipes.
Thus they insist on making the nodes (the servers) smart and the clients (leaves) dumb.
There's an ongoing tension here between software getting complex enough to be slow and people wanting to move computation to servers.
The problem with that, of course, is that the clients inevitably get faster and cheaper, and software gets pushed to the leaves again.
We're carrying around what used to be super computers.
This is also why I would be very cautious about investing in something that bets on offloading computation from clients: If you time it right, it could be huge for a while, but you're betting against advances in the speed and cost of computation. The same advances you'll rely on to scale and drive down costs.
As such it feels to me like they're playing a game of musical chairs.
1. Software is too heavy, let's push it on the server
2. Dumb client gets faster
3. Servers cost money, let's use the unused CPU cycles on the client
4. Bloatware happens, go back to 1.
I think you are leaving out the data story. It is very difficult to push data to the edges, you can easily run into volume or consistency problems.
No surprise that this means there is no single solution. Complexity sometimes makes sense at the edges and sometimes makes sense in the core. It just depends.
I think we have a lot to learn about building systems that give us the flexibility to move things around as needed. Plan 9 was interesting from that point of view because it gave a way for edge resources and core resources to be composed via 9P at any place in the network.
I don't buy that data is a technical problem. It's a business problem inasmuch as holding users' data hostage is central to a lot of business plans. All of the data in my Google account, for example, fits on a microSD card. Of course I want backups of it, so I don't want to actually store it on a single microSD card, but the point is that this doesn't require much logic at the core. You can push processing to the edge while using a storage service. But you can push storage services towards the edge too. For a lot of data we already employ near write-once semantics for the core of the data, which is ideally suited for synchronisation schemes over fully connected centralised storage. Your e-mail, and things like Google Photos, are good examples.
Consistency is less of a challenge than it might seem. It's a challenge if you're frequently disconnected from the network for extensive periods of time and might access and modify data from multiple disconnected devices in that period (note modify, not augment and create something new, which is easy to accommodate). But supporting that is very different from supporting mostly-local computation.
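As a minimal sketch of why write-once data makes sync easy (the `Photo` type and `merge` function here are made up for illustration): if records are only ever created and never modified, merging two replicas is just a set union keyed by id, with no conflict resolution needed.

```typescript
// Hypothetical append-only data: records are created once, never edited.
type Photo = { id: string; takenAt: number };

// Merging two replicas of append-only data is a union keyed by id.
// Duplicate ids carry identical values, so overwriting is harmless.
function merge(local: Photo[], remote: Photo[]): Photo[] {
  const byId = new Map<string, Photo>();
  for (const p of [...local, ...remote]) byId.set(p.id, p);
  // Sort for a deterministic result regardless of merge order.
  return [...byId.values()].sort((a, b) => a.takenAt - b.takenAt);
}
```

The merge is commutative and idempotent, so devices can sync in any order and converge; it's only in-place modification of shared records that forces you into real conflict resolution.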
I think the "architectural principle of the internet" is not what you suggest.
Years ago, I pushed as much work onto the client as possible to reduce the workload on my servers. Now, people complain if you push too much work onto their phones. They want the server to do the heavy lifting so the app can be more responsive on their low-powered mobile client.
"The internet" doesn't define how much weight each end of a connection should bear. It doesn't really even dictate that there are only two ends.
Should a webapp have a braindead REST api with a select, insert, update, delete for each table of the underlying db model and have the client make 1000 nested calls to the server to render the simplest thing? Or should the server api be much more sophisticated so that a single api call can provide all the information required for the render?
There's no one right answer just because "internet".
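The contrast above can be sketched concretely. This is a hypothetical example (the `User`/`Post` types and `getDashboard` endpoint are invented, and the "calls" are simulated in-memory) of a chatty per-table API versus a single aggregated endpoint:

```typescript
// Invented toy schema standing in for a database.
type User = { id: number; name: string };
type Post = { id: number; userId: number; title: string };

const users: User[] = [{ id: 1, name: "Ada" }];
const posts: Post[] = [
  { id: 10, userId: 1, title: "Hello" },
  { id: 11, userId: 1, title: "World" },
];

// Chatty style: one "call" per row the client needs.
// Rendering a user's page this way costs 1 + N round trips.
function getUser(id: number): User | undefined {
  return users.find(u => u.id === id);
}
function getPost(id: number): Post | undefined {
  return posts.find(p => p.id === id);
}

// Aggregated style: the server assembles everything the render
// needs and answers in a single response.
function getDashboard(userId: number) {
  return {
    user: getUser(userId),
    posts: posts.filter(p => p.userId === userId),
  };
}
```

With the chatty style the client pays a round trip per record; the aggregated endpoint does the join server-side and answers once. Which shape is right is a per-application trade-off, not something "the internet" dictates.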
It’s the first I’d heard of this axiom. Can you give an example of that in a real architecture?
I think about a central server with multiple terminals on a network. There the node is the beefy boi and the terminals are just I/O devices. Kind of similar to what MightyApp is doing - except with fewer privacy concerns :)
But no one uses that architecture anymore. To the extent that you have a terminal on a remote server, you use it to configure the server, from your own thick client.
Maybe this is a terrible stance but if the idea pisses off Jonathan Blow and Casey Muratori, I wouldn't take that as a bad sign. jblow and Casey are brilliant programmers and I mean them no disrespect, but their philosophies are certainly in the vein of prescriptive, "correct" ways of writing code. Namely you should write code that is fast and efficient. Unfortunately (or not, depends on your view), programmers do not like prescriptive, "correct" ways of writing code. Worse is better and all. If you give them a cheat code to let them use a little more performance, give them a little more headroom, they'll take it. For all the jblows and muratori's in the world, there's a lot more people who don't care about perf and just want to make cool stuff. For better or worse.
"For all the jblows and muratori's in the world, there's a lot more people who don't care about perf and just want to make cool stuff. For better or worse."
And the fact that most people don't care about this on a deeper level is why we have such a bloated ecosystem. Layers of bad code stacked on layers of bad code.
If there were more jblows and muratori's in the world, Mighty would have no reason to exist. The web would already be performant. That PaulG is bullish on Mighty strikes me as a pessimistic view and a bet that the underlying problem won't be solved.
Yeah I mean I'm not saying it's a good thing that Mighty exists or that it might succeed. I'm just saying that we shouldn't take pithy Twitter replies as evidence it'll fail.
Obviously not the parent commenter, but I'll give this a shot. I think that Jon and Casey (and those of like mind) are not commenting on Mighty as a piece of individual tech, but on the fact that Mighty as a product could even be in a position to exist.
I would phrase the general position (which I think Jon and Casey would support) as follows: the problem a service like Mighty is trying to solve ONLY exists because the standards and practices of modern software development are fundamentally broken.
You even touch on this in your GP post about devs taking shortcuts to 'making something cool'. As the parent post calls it, the cool-stuff tribe are those developers who will use any available cheat code because they expect that doing so is acceptable.
Your GP post seems to try to counter the Jon and Casey position by saying that because developers just want to make things, they will, with no concern for any impacts these accumulating decisions may have. I think the 'you're missing the point' comment derives from here. You seem to be saying that Jon and Casey's position is not palatable to devs because those devs are not concerned with performance and don't like to be told to consider such aspects of the things they make. But J&C's point is that if the standards for developing software were not so broken, then the position of the cheat-code-using devs would be uniformly decried as substandard and unacceptable. In a world where development standards were in line with J&C's views, Mighty would not be in a position to be a viable product, because using the web via a native browser accessing properly developed web content would be a pain-free and performant experience.
Premature optimization makes it harder to write fast and efficient code. That is because you guessed where the bottlenecks would be before you had enough information (which is why it was "premature").
The optimizations make the code more brittle and hard to change so that when you reach a position to measure where the bottlenecks are it is too complicated to do the proper optimizations.
I find it strange that, for someone who is relatively immune to any real scrutiny and constantly claims to be a bold thinker, PG is always so coy in his writing.
I would find PG's recent stream of ego driven rants much more enjoyable and potentially insightful if he would just say what he's really trying to say.
He might be trying to add strength to his arguments by making them somehow more general, but since these pieces always seem very clearly about a specific bone PG has to pick, the result is they read as some of the most cowardly essays I've ever encountered.
I was young when that was first announced, and privacy wasn't as big a deal or concern as it is these days. I remember it making the news, me being excited about it, and noticing how fast everything was. It took a day or two for my brain to catch up and ask, "this means they have access to everything?"
So during their onboarding process they have a survey about your usage patterns.
One of the questions is about speed of switching between tabs and one of the answers is "Tabs are switched very fast (<1 sec)"
He might also be talking about many more ideas too: Dropbox, Boom, Lambda School, and another dozen ideas within YC that all seem surprisingly possible. You can criticize him for having skin in the game but you could equally commend him too: he puts his money where his mouth is.
hahaha, fuck, I remember ages ago, maybe 2013, when people were trying to get me to buy into the "web 2.0" craze -- "Web is the future!", "web can scale!", "nobody will install apps!", "your stuff will be accessible everywhere!"
funny how they all turned out to be mixed bags. the best part? people telling me web apps are "lighter". even back then I knew that was a hot load of bullshit.
good to know the other shoe has dropped and we're really going full-circle to thin clients/mainframes, but shittier.
I have trouble seeing how it's going to find a sustainable market except as business spyware/leak-prevention. Which is yet another reason I'm not a fan of the idea. In that capacity it may actually manage to survive and even thrive, but I'm not going to be happy about it.
The $50 subscription is to avoid flooding their servers while testing the software.
This product will likely start as corporate malware, but once the beta is over and their tech is really working at scale (more difficult than you might think), they'll probably give it away for free or very cheap.
My IMMEDIATE reaction to this was that someone is gonna pay $50 a month and possibly make more than that back by mining crypto in the browser. JS crypto miners are already out there; now they just need to pay some numpty to run a browser somewhere with more resources. They're even offering GPU-level processing!
> This product will likely start as corporate malware, but once the beta is over and their tech is really working at scale (more difficult than you might think), they'll probably give it away for free or very cheap.
So you think it is a dragnet-surveillance get-acquired-for-our-data play, longer term? That's even worse, if so.
> People are not claiming that it is a bad idea because it is infeasible or not valuable, but because it is dangerous and also because it sounds technically ridiculous. (thin client inside thin client)
This is the kind of asymmetric dismissal he’s talking about, and it’s not very good.
Dangerous? We run everything via cloud services and encrypted communication. "Sounds technically ridiculous" - so did probably every modern technological idea when it was new: "you put your database on some other company's servers?!"
Yeah I mean I’m with you - I like local control and I think Urbit is cool because of this.
That said, the potential for mighty is real and dismissing it for these reasons is dumb. The same logic would have also dismissed nearly all modern wildly successful tech companies.
It's quite illuminating to see the very forum that PG created calling him out on his shortcomings. Almost all the recent posts by PG were panned by the HN hivemind, and PG seems to take no hints from the wisdom of the HN crowd. The case with MightyApp is the latest in the saga.
> In case someone missed the story, Paul Graham is indirectly talking about the feedback received by Mighty App.
This is the subtext I came here for, thanks.
I'm undecided on Mighty, I don't have a strong opinion. I wouldn't consider it "revolutionary", unless perhaps they have a vision way beyond the product they've talked about publicly, but it would seem a bit unfair to use that as a defence against criticism.
One thing I did notice is that a few people involved or indirectly involved with Mighty seem to have a problem with the criticism on HN, which seems really strange to me if you consider yourself an open individual who supports freedom of discussion. You don't need to engage with that criticism directly, but presumably it's useful to be criticised - you'll need answers to those arguments. And if the criticism isn't constructive, well, who cares? There's always some noise.
It feels to me like "oh yes I want people to freely debate ideas and criticise them but just not the ideas I like".
It's funny how this context changes the entire article. I didn't realize that this was related to Mighty and thought it was a good "call to arms" for domain experts in non-tech industries to disrupt their markets.
> People are not claiming that it is a bad idea because it is infeasible or not valuable, but because it is dangerous and also because it sounds technically ridiculous.
An idea isn't bad if it's valuable and built on ridiculous / dumb / silly / simple / old technology. A product should be measured on output, not input... in fact I'd go as far as to say that we (hackers) should celebrate ideas that deliver incredible value with such a simple implementation.
I agree that being ridiculous is not sufficient to discard it.
Many things are considered ridiculous or impossible before they work.
The main point is that yes it could work and be valuable, but not for the end-user. It would be mostly valuable because of the transfer of power from the user to the server.
We should know better, we should learn from the lessons of the recent past, we should be more careful before giving away our freedom (power is highly correlated to individual freedom)
Then he'd have to face specific arguments. This way it looks like it's some general wisdom, and anyone who isn't enthusiastic about Mighty App can be painted as the unsavory characters he invents. How easily this army of strawmen is burnt to a crisp!
The Jealous Nerd:
> One reason they do it is envy
The Hipster:
> it's an easy way to seem sophisticated
The Dark Ages Inquisitor:
> Darwin's harshest critics were churchmen
The Luddites and Sheep:
> the sheer pervasiveness of the current paradigm
This replies to none of the legitimate criticism, and manages to be more dismissive by filing everything under "crabs in a bucket, extra salty".
These "Mighty is bad" arguments are actually arguments in support of Mighty.
They are arguing that technology shouldn't need Mighty. The mere fact they are complaining about the state of the world means that there is a problem that needs solving.
Imagine not having to think about cross-browser compatibility. Customer wants to use Internet Explorer? Just head to ie.example.com for a Mighty version of a given website.
What happens when Mighty's competitors offer slightly different browser emulations and you now have to build your web app for Mighty and Weakly and Mediocry as well?
Mighty is such an obviously good idea, and if you've spent decades on the cutting edge of the web you'd understand why. People's demands for more powerful web apps are for all intents and purposes infinite, and it is much more pleasant as a developer to develop once, run anywhere, than to develop an app for low powered clients and a different one for high powered ones.
I'm rocking an 8-year-old laptop. I also use NoScript, which means I'm pretty aware of what code webpages are actually running on my computer.
What I've seen is that the things that make it really chug have very little to do with how much actual functionality the website has. Beautiful CSS animations usually aren't too bad, either, even on my old computer. The real performance hogs tend to be things like scrolljacking, telemetry, and dynamic ad placement.
The interesting outlier here is gmail. Gmail fascinates me, because it keeps getting slower and slower, without, as far as I can tell, actually gaining any new capabilities.
Gmail's so bad now that I only use "classic HTML" Gmail in the browser, and native clients (Apple's Mail, for example). I have no idea how they managed to make a relatively simple "web app" so huge and heavy. You could add all of full-fat Gmail's features to "classic HTML" Gmail for very little cost in bundle size and active resource use—though the result might not be an "app" in many folks' opinions, I guess. I just know navigating classic Gmail, with its "bad" full-page loads, is way faster than the "efficient" AJAX-style crap on normal Gmail.
Mobile Gmail's not just heavy—it's broken. Whatever stupid, misguided bullshit they're doing with scrolling makes it register clicks where they weren't intended if I'm not super careful.
I want to visualize 1TB of single cell RNA seq data in a browser tab, then open a new tab, change some params, and share the link to a colleague. I want it to be instant.
In the non-multiomics world I also make browser based data visualization software (most recently worked on this, for example: https://ourworldindata.org/coronavirus-data-explorer). I want to load up 500GB and facet on multiple dimensions and then share the link and have it all run instantly.
I'm not talking about Gmail and Text editors here. M1s solved that.
> I want to visualize 1TB of single cell RNA seq data in a browser tab, then open a new tab, change some params, and share the link to a colleague. I want it to be instant.
But why? Wouldn't you be better served writing a native application that would do this better? This seems like you're building a problem for this solution. Most people use a browser for Gmail and to read news. Your example already works really well in the browser on my phone.
The "but why" question was not pointed at visualizing 1TB of single cell RNA sequence data or collaboration. It was pointed at why you'd want to do it in a browser? Why wouldn't you be able to make links and share them from a native application that they can then open in their application?
Maybe I'm missing something in your problem statement, but are you downloading all of that 1TB into your browser window and changing parameters? If yes, Mighty has to download that data too, and considering that each user is sequestered, anyone you're sharing this with will have to download it all as well. If not, and the visualization is running somewhere else with the result streamed to your system, how does Mighty solve that problem?
This is roughly what I'm thinking, for that particular use case.
The idea that you're going to shove 1TB of data down to the client strikes me as slightly unhinged. Even if we assume you can achieve a sustained transfer rate of one gigabit per second, it's going to take over 2 hours to get a terabyte down to the client. I'm guessing, though, that the actual visualization is nowhere near one terabyte. The data is going to have to be aggregated somehow, because no computer monitor can display a terabyte worth of information all at once; even a 4K monitor would fall short by many orders of magnitude.
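For what it's worth, the arithmetic holds up. A quick back-of-envelope sketch (assuming a decimal terabyte, the sustained 1 Gbit/s link mentioned above, and a 3840x2160 4K display):

```python
# Back-of-envelope check of the transfer-time and screen-size claims.
TERABYTE_BYTES = 10**12       # decimal terabyte
LINK_BITS_PER_SEC = 10**9     # 1 Gbit/s sustained

transfer_seconds = TERABYTE_BYTES * 8 / LINK_BITS_PER_SEC
transfer_hours = transfer_seconds / 3600   # ~2.2 hours, i.e. "over 2 hours"

pixels_4k = 3840 * 2160                    # ~8.3 million pixels
bytes_per_pixel = TERABYTE_BYTES / pixels_4k  # ~120,000 bytes of data per pixel

print(f"~{transfer_hours:.1f} hours to move 1 TB at 1 Gbit/s")
print(f"~{bytes_per_pixel:,.0f} bytes of data per 4K pixel")
```

So even in the best case the raw data takes hours to move, and a 4K screen can show roughly one byte for every hundred thousand bytes in the dataset.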
It would be much faster, and, I think, simpler, to keep the data on a central server, have it generate the visualizations, and push them down to the client.
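To make that concrete, here is a toy sketch of the server-side aggregation idea (names and numbers are mine, not anything Mighty or the commenters actually described): the server reduces a long series to one (min, max) pair per horizontal pixel and ships only that summary to the client.

```python
import random

def downsample_minmax(values, buckets):
    """Reduce a long series to at most `buckets` (min, max) pairs,
    roughly what a server would ship instead of the raw data."""
    step = max(1, len(values) // buckets)
    summary = []
    for i in range(0, len(values), step):
        chunk = values[i:i + step]
        summary.append((min(chunk), max(chunk)))
    return summary[:buckets]

# Stand-in for a huge server-side dataset (a real terabyte wouldn't fit
# in RAM, but the same aggregation can be streamed chunk by chunk).
raw = [random.random() for _ in range(1_000_000)]

# A 4K-wide client only needs ~3840 points across the screen.
summary = downsample_minmax(raw, 3840)
print(f"{len(raw):,} raw values -> {len(summary):,} aggregates shipped")
```

The point is only the ratio: a million values collapse to a screenful, which is why shipping the raw data to the client buys you nothing over rendering server-side.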
> The idea that you're going to shove 1TB of data down to the client strikes me as slightly unhinged
Yeah, which is why I asked them for clarification, because it seemed really weird to assert that as a positive for the app.
> It would be much faster, and, I think, simpler, to keep the data on a central server, have it generate the visualizations, and push them down to the client.
Yeah, I completely agree, but in that case Mighty would have next to no advantage over just loading the visualization on your own machine. Hence, again, the confusion about asserting Mighty as a means to this end.
Yeah, agreed. I can see Mighty as perhaps being useful as a band-aid to deal with someone else's poor, resource-hungry design.
But if you're the actual app developer, I'm just not seeing a good reason why you would want to deliberately incorporate this technology into your design. Why farm the server-side rendering out to a middleman when you could just... do server-side rendering?
I'm sorry but can you please stop speaking in riddles and be clear about how you feel Mighty is solving this issue? I'm genuinely at a loss about how this is solving your use case but your answers are only cryptic. I truly want to understand how Mighty might help to solve real world problems
How do browsers make something distributed? How are browsers the only "magic platform" that allows you this? How is Mighty contributing to solving this problem? How does the M1 matter here?
The theoretical end you describe, one where we are all just writing web applications to run in Mighty, sounds incredible but ignores reality. There will not be 100% or even close to 100% adoption of streaming browsing, especially if it costs $50(!) a month. And you can imagine everything else that will go wrong:
- Mighty creates arbitrary APIs that allow more speed, so now we have yet another build target
- it manages to become successful, so competitors jump in and we have multiple targets to build for
- God forbid, competitors offer “free” versions that become even more invasive to your browsing
For me, the question becomes—if we must write applications to one target that's going to run on a server anyway—why oh why must it be web tech? The whole appeal of applications (not documents) in the browser was that they were (kinda, sorta) able to run on any platform. If your target is one browser on one OS, and you're writing an actual application, why would you subject yourself, your developers, and your product quality to an HTML + JS UI?
That sounds like an odd use of "skin in the game". "Skin in the game" would be if that thing failed and then he gets skinned, basically. Being invested in something with potentially lots of upside and worst case a bit of lost investment isn't like that.
No it's not regardless of the rest. It implies actual risk. Some rich dude losing a dime is not skin in the game, unless the phrase is meant as completely meaningless masturbation. "Skin" implies an actual impact, otherwise it's just "in the game".
Like .. what? Will people spit at him in the streets, if the company fails? Will they come with torches and pitchforks to his house and yell "it was a bad idea to run a browser in the cloud"?
It's interesting that you need any of this spelling out, especially as you're so irate about a turn of phrase, but even the post you're replying to here is politely spelling it out for you. Reputation _is_ money when you're a VC. You appear to be arguing with yourself and making up things to be angry about.
- Personally I don’t think PG is defending Mighty App because he has skin in the game. I would think he is at a point where he doesn’t need to.
- Also, IMO we should let Mighty App (or for that matter other crazy ones) play out. We all know little about the capabilities of individuals and what the future holds. So why ridicule it and prematurely declare certain death?
Couldn't disagree with your second point more: you shouldn't let dangerous tech "play out". It's much easier to kill snakes just as they've hatched. Basically every bad thing about the web and software in general was born as "let it play out", and then it grew too big to be killed.
Hold it. I agree with you, but I'm having a language nerd moment. This feels like one of those "catch more flies with honey than vinegar" things, where although the point is made clearly, the metaphor is literally false (vinegar is incredibly good bait for catching flies).
> It's much easier to kill snakes just as they've hatched.
Aren't a ton of itty bitty snakes going to be harder to kill than one big snake? I've never tried to kill snakes of any age myself, so I could be way off.
https://www.mightyapp.com/ https://twitter.com/Jonathan_Blow/status/1387101172230672389 https://twitter.com/cmuratori/status/1387645578067124224