Is there a single repo that has all of these "aha" images? I could see the clown right away, and the vines/plants in the 2nd example were what I thought first but organic shapes are harder to be sure about.
That also brings to mind that first exposure to this dataset affects the effectiveness of the rest of the dataset. If you're doing initial exposure, you'll definitely get the "aha" moment. But if all of the images in the dataset are of the same type, your brain quickly learns the pattern and the "aha" moment vanishes.
If they did their study on all of the images per test subject, the results after maybe the first 5 are basically useless for any definitive conclusions.
Please, just provide the answer. Maybe it's obvious but to people like me all I can think of is recipes for butter sauces for crab legs involving pine nuts. Which actually sounds quite good.
Unlike many other languages, English has grown because it's adaptable. It has almost as many borrowed words for advanced concepts as "native" words. It's hard to even distinguish anymore.
If anything, a solid counter argument can be made that Romance languages (descended from Latin) lack the flexibility of English and other Germanic languages.
Non-native English speakers frequently complain that English is more complicated than other languages. This is true. I'm a native speaker and can only read limited Spanish; where I get hung up is the grammatical gender of nouns. I had a similar experience with Japanese when I was studying it a few years ago.
I completely believe that primary language has a physical effect on the brain in terms of neural structure. It must have.
But since English is so adaptable, if a concept is better expressed in another language, we tend to adopt that language's word to express it.
However other languages seem to be less adaptable. For example, France has or had an official government ministry for decades to manage new foreign words entering the French language. To this day, there are newish specific French words for technologies coming from English speaking countries.
Another good example is some YouTube videos from India I've run across (I turn on subtitles). Say the speaker is talking in Hindi: often the more technical terms are English words or phrases freely interspersed with the Hindi. They're borrowing the English words, with a bit of a Hindi accent shaping the pronunciation.
Going back to Japanese, we see the same thing. I don't know if the JP gov has a language ministry.
But if you look at written Japanese text, you definitely see that most numbers are written with Western 0-9 digits mixed in with katakana or hiragana. When you hear people speaking, and once your ear is oriented towards Japanese sounds, you can start to pick out the adopted English words, which are said with native pronunciation and emphasis.
> However other languages seem to be less adaptable. For example, France has or had an official government ministry for decades to manage new foreign words entering the French language. To this day, there are newish specific French words for technologies coming from English speaking countries.
It's not that French isn't adaptable; it's that France wants to maintain the language as "pure".
They have the same in Quebec.
When a new word appears, they take the view that there should be a French equivalent instead of just using the original word. Yes, they mainly do this for English (there is no invented French word for tsunami or iceberg), because they assume that French will slowly disappear if they don't protect it.
Language imports are a poor substitute for grammatical flexibility. That's why French, for example, has limited need to import words directly: it can recreate the same meaning with native words. German is another great example: its grammar provides a lot of the flexibility that was lost in English. It is almost impossible to translate German philosophy into English without losing the natural flavor of the word combinations that make German so adaptable.
I have a PC I built when the 2080 TI came out. How is Linux driver support for those cards today? The machine is still more than powerful enough for my needs, but I haven't used it in a couple of years because Windows really is just complete garbage. I'd like to be able to take advantage of new to moderately old hardware without dealing with Windows.
I've been running a 2080 TI with an AMD 5600X on Linux Mint Cinnamon for three months, 24/7, with no graphics issues. I previously ran the proprietary nvidia-driver-550 but now use the open source nouveau drivers. There are five choices for Nvidia drivers: open source, closed, or open kernel (up to 570/580 now). This was a complete switch off Windows 10, which I've only had to boot twice in three months, to transfer some data.
Every game I've tried on Linux was either gold or platinum on ProtonDB and ran fine so far. WINE worked for running a couple non-game apps. Lutris is another way to run programs but I haven't needed it yet.
Definitely try it if you have a machine sitting there. There is so much support for Linux and Mint on the web that it was easy to answer any questions I had while setting things up.
A 1070 Ti has worked perfectly on Arch for the past ~6 months with the latest drivers (better than Debian stable!). This card is old enough that only the closed source drivers support it, but it seems to work fine.
If it helps, I used to use a gaming laptop for work that had a mobile RTX 2060. I was able to run some recent games like Elden Ring (including mods and online play), and some older but still demanding titles like The Witcher 3, all without tinkering too much on an out-of-the-box Ubuntu LTS install (I later switched to Pop!_OS because I don't like snaps that much).
Someone else recommended Pop!_OS in other comments. I pulled up the page but haven't looked at it extensively. In what ways is it different from a recent Ubuntu LTS install?
I certainly won't be upgrading to this version. I already don't really like the current version and see no reason to inflict a Windows Vista-like experience on myself.
It sounds cool, but for usability it's not great. Think about how Reddit recalculates the result order as you're paging through, so you see items from the previous page show up on the next. Now imagine that happening in real time. Maybe there's a link you want to read, but you get pulled away for 10 minutes. By the time you get back, the link is higher or lower, or may be missing completely.
That is a good point, but there are design solutions around it. For example (there's a rough sketch of the last option after this list):
- Poll every X seconds instead of real-time
- Enable user to toggle real-time mode
- Load new posts in with a "+" button at the top to fetch the latest posts (like Twitter)
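Here's what that last option might look like as a minimal sketch. Everything in it is hypothetical: the /api/posts endpoint, the Post shape, and the render()/updateBanner() hooks are stand-ins for whatever the real app would use.

```typescript
// Sketch of the "+" button approach: poll in the background, but never
// reorder the visible list until the reader asks for the new items.

interface Post {
  id: string;
  title: string;
  url: string;
}

let shownPosts: Post[] = [];   // what the reader currently sees; order never shifts underneath them
let pendingPosts: Post[] = []; // fetched in the background, shown only on request

const POLL_MS = 30_000; // option 1: poll every 30 seconds instead of streaming in real time

async function poll(): Promise<void> {
  const res = await fetch("/api/posts"); // hypothetical endpoint, newest-first
  const latest: Post[] = await res.json();
  const knownIds = new Set(shownPosts.map(p => p.id));
  pendingPosts = latest.filter(p => !knownIds.has(p.id));
  updateBanner(pendingPosts.length); // e.g. "+ 4 new posts"
}

// Option 3: the reader clicks the "+" banner to pull new posts in on demand.
function showPending(): void {
  shownPosts = [...pendingPosts, ...shownPosts];
  pendingPosts = [];
  render(shownPosts);
  updateBanner(0);
}

// Placeholder UI hooks, just logging for the sketch.
function render(posts: Post[]): void {
  console.log("visible:", posts.map(p => p.title));
}
function updateBanner(count: number): void {
  console.log(count > 0 ? `+ ${count} new posts` : "up to date");
}

setInterval(poll, POLL_MS);
```

The key property is that the visible list only changes when the reader asks it to, which avoids the "link moved while I was away" problem from the parent comment.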
That's a good idea. Maybe bind that to a key (what about Ctrl-R?). And maybe this feature could go into UAs, since it would be useful for more websites.
oh boy I wrote one of these many years ago for HN.
Within like an hour or two pg emailed me asking me to stop. I didn't know it at the time, but HN was being run on a rusty potato and scraping the homepage every 5 or 10 seconds was causing significant load.
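(For anyone tempted to build one of these today: the official HN API makes it much cheaper than scraping the HTML front page. A minimal sketch, polling the documented Firebase topstories endpoint; the 60-second interval and everything else here are arbitrary illustrative choices, not anything HN prescribes.)

```typescript
// Poll the HN API for front-page changes instead of scraping the homepage.
const TOP_STORIES = "https://hacker-news.firebaseio.com/v0/topstories.json";
const POLL_MS = 60_000; // deliberately polite interval

let lastTop: number[] = [];

async function checkFrontPage(): Promise<void> {
  const res = await fetch(TOP_STORIES);
  const ids: number[] = await res.json();
  const top30 = ids.slice(0, 30); // roughly one front page worth of stories
  const fresh = top30.filter(id => !lastTop.includes(id));
  if (lastTop.length > 0 && fresh.length > 0) {
    console.log(`${fresh.length} story id(s) entered the top 30:`, fresh);
  }
  lastTop = top30;
}

setInterval(checkFrontPage, POLL_MS);
```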
Afaict, HN is still running on a rusty potato. The software's written well, so it doesn't need to run on more than that. (What's going to happen to it? Someone links to it from HN?)
> one of the more important and trafficked properties on the Internet.
I like HN, but it's really only important within a very niche subset of the Internet, and it also doesn't have much traffic. There's like a single post submitted every two minutes. That's not much.
I think [being unable to handle peak load] does not imply efficiency. Unless perhaps this is a joke that I’m misunderstanding?
Nonetheless, I suspect that HN probably is quite efficient, just based on what I know about dang. Even so, the parent's claim was that it was popular and important, not that it was efficient.