
> Second, there's a decent sized market for cheap, unapproved HID/LED kits for older cars. They're often not aimed correctly.

This, so much this. I'm having no issue with new cars and their LEDs. The aftermarket kits installed on 1994 Swifts and Passat B5s are not at all configured properly. They just throw them on the car, go "yay i can see more", and sometimes I even think that they are using their high beams. But no, it's just their incorrectly set up lights.


> I'm having no issue with new cars and their LEDs

Funny, it's the opposite for me. Brand new SUVs are by far the worst offenders.


> No GitHub PR sync for stacks. Managing stacked diffs locally is great, but (a better version of) Sapling's PR syncing would be a huge value add. This is somewhat of a pain point for me directly, but even more so a weakness when I've tried to evangelize jj internally as a viable "stacked diff" solution (e.g. to be blessed by our eng tools team). Someone familiar and comfortable with Sapling (or just skeptical of jj) can easily point to this feature gap in a way that pretty much ends the conversation.

Can you explain this point in detail? I've been using jj and doing stacked PRs on GitLab using `jj git push --all`. I haven't used Sapling, so I'm not familiar with its way of doing stacked PRs, and I'm really just curious what you miss from it.


Sapling has a way to manage multiple PRs, setting the target branch of each to its parent and adding information about the stack of PRs to each PR description.


What does this look like in practice?

It seems jj supports this workflow roughly as well as Sapling. If I have five PRs open with five feature bookmarks, changing an ancestor and pushing will update all PRs simultaneously. GitHub's PR view notices that the heads were updated and changes which commits are included in the review set.
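
Roughly, the workflow looks like this (a sketch, not an exact transcript; the bookmark names are made up):

    # build the stack
    jj new main -m "feature A"     # ...edit files...
    jj bookmark create feat-a
    jj new -m "feature B"          # ...edit files...
    jj bookmark create feat-b
    jj git push --all              # pushes both bookmarks as PR branches

    # later: amend feature A after review
    jj new feat-a                  # ...make the fix...
    jj squash                      # fold the fix into feature A; feature B rebases automatically
    jj git push --all              # both PR heads get force-updated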

Sapling shares the limitations of GitHub's review UI: reviews still include all changes (I think). The only nice bit is Sapling automatically submitting the PRs for you and adding the descriptive info?

As a nifty workaround, it looks like Sapling recommends an external tool called reviewstack.dev to properly review its stacked PRs, which show incorrectly by default on GitHub. So is there much difference?


Anti-cheats are not really compatible with Linux IIRC. Maybe there have been improvements on this front, but I think this was the main issue for a lot of gamers. That, and there were cases where players were getting banned for playing through Wine.

I once tried to set up GPU passthrough to a Windows VM to play WoW, but there were a ton of reports that Blizzard just banned players for using QEMU VMs because they were marked as cheaters.


Could some game programmer say if it's true that kernel-level anti-cheat is just bad programming?

Primeagen recently said that in a video commenting on PewDiePie's "I switched to Linux" video. While he's apparently a good programmer (he worked at Netflix), he uses Vim, so I don't trust him. Edit: the part about Vim is an edgy joke.


Weird reason not to trust someone, and I think prime is a decent programmer.

I work in AAA gamedev and have deployed kernel-level anti-cheats before, and I'm aware of how unpopular they are, so sorry for that… but you would also accuse us of "bad programming" if there were an overabundance of cheaters that went undetected and/or uncorrected.

The answer is unfortunately complicated. The kernel-level anti-cheats themselves aren't necessarily poorly written, but what they are trying to do is poorly defined, so there's a temptation to put most of the logic into userland code and then share information with the kernel component, but then it's dangerous for the same reason that CrowdStrike was.

Not doing endpoint detection is also a problem, because some amount of client trust is necessary for a good experience with low input latency. You get about 8ms in most cases to make a decision about what you will display to the user; that's not enough time to round-trip to a server to check whether what is happening is OK or not. Movement in particular will feel extremely sluggish.
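
(Rough numbers behind that budget, assuming a 120 Hz target frame rate:)

    frame budget:       1000 ms / 120 fps ≈ 8.3 ms
    network round trip: commonly 20-60 ms

    => one server round trip costs several frames, so the client has to act first.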

So it's a combination of kernel-level code being harder in general (malloc, file access, etc. are things the kernel gives you in userland, after all), the problem space being relatively undefined (find errant software packages and memory manipulation), not being able to break out of the kernel-level environment for an easier programming and iteration experience, and trying not to affect performance.

Lots of people think they can do it better; I'm happy to hire anyone who actually thinks they have a clue. It's a really hard problem, honestly, and the whole gamedev industry is itching for something better: even us gamedevs don't like kernel-level anti-cheat, it makes debugging harder for ourselves too and introduces hard-to-reproduce bugs.

PS: sorry if I'm not being eloquent, I am on vacation and typing from my phone.


This is well written and quite easy to understand. (I only have cursory knowledge of programming.)

However, what if Primeagen meant that HAVING to IMPLEMENT kernel-level anti-cheat is a symptom of bad programming, and not the anti-cheat per se? (That is, with good enough programming, it could somehow be avoided.)

And kudos to you. I appreciate people in game dev; they can get a lot done in a short time. I haven't played an MMO FPS since Battlefield 3, and it wasn't that bad then. But I've heard that without kernel-level anti-cheat they would be unplayable.

Thank you for your time!


The reason you need kernel-level anti-cheat for it to be meaningful is that it necessarily needs to sit at a level lower than the cheats themselves, and cheats can be very advanced these days.

Long term I'm kinda hopeful that this is something that will be mitigated through AI-based approaches working to detect the resulting patterns rather than trying to detect the cheat code itself. But this requires sufficiently advanced models running very fast locally, and we're still far from that.


The cheaters are very good these days. They will happily sit in the kernel space to hide from the game if needed, because people pay a lot of money to cheat developers to be able to cheat.


> so there's a temptation to put most of the logic into userland code and then share information with the kernel component, but then it's dangerous for the same reason that CrowdStrike was.

I don't understand, how could crowdstrike have avoided their issues by putting more code in the kernel? Or am I misreading your statement?


The crash was caused by a data-parsing issue in the code in the kernel (the heuristics database).

If they had not tried to parse data inside the kernel it would not have been an issue.


Good faith question: why is the server not the source of truth? With local interpolation for things like character movement, reconciled in heartbeat updates?


FD: Still on a phone on vacation. :)

The reason is the round trip time mainly.

Server corrections will feel like "floaty" or "banding" behaviour; we used to do that, and people got upset because it "feels" wrong.


Not all cheating sends "bad data" to the server. Cheats like wallhacks or aimbots are purely client-side and can't be detected on the server.


The opposite is true. He uses vim therefore I trust him.


The two most widely used anti-cheat applications, BattlEye and Easy Anti-Cheat, both natively support Linux, but game developers have to check a box to enable it.

About 40% of games that use anti-cheat currently work on Linux. Getting banned for using Wine is very rare, because an anti-cheat that doesn't support Linux would complain about not running and prevent you from even joining a game to get banned in.

https://areweanticheatyet.com/


Can confirm. We are awaiting a response on our DUNS number. They said we have to provide a DUNS number, which we have, and that it has to be exactly 9 characters long. Ours is 10. Apple accepts it, Google does not. Even the DUNS lookup site only finds our company using the 10-character number.

Google gives no response, just extends the deadline until they remove our account for not providing the DUNS number.

Funny thing is that our number starts with a zero, so theoretically it could be 9 characters long, but the official lookup requires the 0 prefix.


The issue with PHEVs, in my experience, is that people CBA to charge their car when the battery depletes and will treat their cars as if they were ICE-only. And in that case they just weigh more, put more pressure on the road, and the battery takes up quite a bit of cargo space.


Getting 50 extra miles in your garage or a 15-minute pit stop is a lot easier than charging an EV to full. Each 50-mile charge is about 2 gallons saved, and people like to save $10 every few days if possible.
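
One way that pencils out (the mpg, fuel and electricity prices here are rough assumptions):

    gas avoided:      50 mi / 25 mpg ≈ 2 gal ≈ $8-10 at the pump
    electricity used: 50 mi x 0.3 kWh/mi ≈ 15 kWh ≈ $2-3 at home rates
    net saving:       roughly $6-8 per 50-mile charge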

PHEVs are lighter than full EVs and incentivize the manufacturer to build a smaller car. (EVs are longer because the batteries take up lots of horizontal space.) If Mazda's experiments pay off, then a Wankel [1] engine PHEV will be space efficient and sustainable. Now, not everyone needs to tow, but EVs are horrible at towing. A PHEV is an ideal long-term alternative to petrol for heavy-hauling use cases.

I firmly believe fully emissions-free vehicles will be the future. But the power grid & charging infrastructure for 100% EVs are at least 10-20 years away. Until then, PHEVs should be encouraged as a stopgap, especially for countries without easy ways to generate renewable energy.

[1] https://www.youtube.com/watch?v=-3gzQVGEqF4


I calculated how much our savings would be if we bought a PHEV instead of a new ICE (or a very efficient mild hybrid like a Civic or a Corolla), and we wouldn't break even after 5 years, because buying a PHEV costs that much more.
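
(The break-even math is just the purchase premium divided by the yearly fuel savings; purely illustrative numbers:)

    premium:      ~$5,000 extra for the PHEV
    fuel savings: ~$800/year
    break-even:   5,000 / 800 ≈ 6+ years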

Of course this is based on our car usage patterns, so it's really subjective.

I agree with you on the power grid and charging infrastructure. A counterpoint I rarely see is that if everyone switched to EVs or PHEVs in 5 years, the electrical infra would collapse. And the electricity price would skyrocket and would not be as cheap as it is now. So we should keep that in mind when we calculate the savings.


> A counterpoint I rarely see is that if everyone switched to EVs or PHEVs in 5 years, the electrical infra would collapse.

Electricity grids are massively overbuilt and underutilized because they need to deal with a big spike of power usage in the early evenings. So if grids became strained by EVs, it would be relatively easy to fix this: just encourage or mandate time of day tariffs.

The price signal would encourage people to charge outside of peak times. The improved utilization of the grid may even reduce distribution costs per kWh.


There are many countries where this is not true. Look up loadshedding in South Africa, for example.


Over the last 30 years, South Africa's generation capacity has crumbled from 37GW to 28GW. So yes, sure, electrifying anything there is not going to work. But I'd say that's a completely different category of problems.


Why are people making this claim?


PHEVs have the potential to lower emissions, but humans are lazy based on the data. Ergo, you have to engineer around the human (support BEVs, do not support PHEVs through policy).

https://cars.usnews.com/cars-trucks/features/phev-owners-not...

https://blog.ucsusa.org/dave-reichmuth/plug-in-hybrids-are-t...


I don't think these studies are saying what you think they are. The incentives are clear for a PHEV that you can plug in yourself at your home: it's cheaper and not hard to do.

This is a very strong prior, and with a little digging you can find out that these cars are being bought for the tax credits, and then used by people who do not have access to chargers. Either because it is a rental or because it is a company car. If it is a company car, the reimbursement process for gas is easier, and why would I go through the hassle if I'm not the one saving money?

The case for PHEVs is very strong, it is a much more economical use of lithium battery capacity, is cheaper to operate, produces less CO2, and can be operated like an ICE vehicle in a pinch.

They strategically dominate EVs. It's absurd to suggest otherwise.


If you have data demonstrating strong EV use of PHEVs (vs defaulting to ICE most of the time), provide it, but it is absurd to propose these suboptimal vehicles will be maximized for low emissions use based on human behavior.

It doesn’t matter how strong the case is for PHEVs if the data doesn’t conclusively demonstrate they’re being used appropriately to minimize emissions. That’s just hope, and hope is not a strategy. Frankly, PHEV tax credits should be something like revenue you have to recognize over time, only provided when proven they’re being used in the manner desired (versus at time of purchase, after which you might not ever even plug the vehicle in).


I will not provide data; your data shows it quite well enough. The mechanism by which PHEVs achieve strategic dominance is more than enough evidence.


Can you give an example of the permission issues that you had with GQL and that would've been easier in REST? Genuinely curious, as I'm implementing a GQL backend with simple permission handling and haven't run into anything yet, but I wanna know what could await me.


With REST I can fairly easily filter out any data based on roles/permissions, either at query time or before turning it into JSON. With GraphQL I need that info deep in the resolver logic, and for nested data I don't want to fetch the name of a person if the calling user doesn't even have access to see that user (and I don't want to fetch the user and their name only to delete them from the response later). GraphQL being so open-ended about what you are fetching means I have to make sure to plug a ton of holes, whereas with REST I have a specific query and there is no way to fetch nested data (unless I specifically allow it via GET/POST params). I can easily say "if role X -> use this query, if role Y -> use this query, etc."; I found that very difficult to do in GraphQL.
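
A rough sketch of where that check ends up living, in a Node/TypeScript-style resolver map (all names here are made up for illustration):

    // The "name" field has to guard itself, because any query that can reach
    // a Person object can also ask for its name.
    interface Context {
      viewer: { canSeeUser(id: string): boolean };
    }

    const resolvers = {
      Person: {
        name: (person: { id: string; name: string }, _args: unknown, ctx: Context) =>
          ctx.viewer.canSeeUser(person.id) ? person.name : null,
      },
    };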

GraphQL feels like magic when you first start with it (which should be a red flag), but once you need to support more roles or sets of permissions, and as your business logic starts to creep in, things go haywire. In my experience, things that are easy up front become unmanageable once business logic gets added in. For straight CRUD it's amazing, but very rarely do our apps stay as just CRUD, and that's when things fall down. For example, on creation of a new user I need to send a welcome email. It's been 5 years since I was working on that GraphQL project, but I have no clue how we'd handle that. I'm sure there is some kind of event system we could hook into, but with REST I just have a simple endpoint that saves the data to the DB and then sends an email (or puts it in a queue), way easier than in GraphQL.

Again, fetching and updating data is easy until you need to handle edge cases. I have the same feelings about Firebase and friends: feels like magic at the start but falls down quickly and/or becomes way too complicated. GraphQL feels like DRY run amok: "I have to keep writing CRUD, let me abstract that away", OK, but now if you need special logic for certain use cases you have a mess on your hands. Maybe GraphQL has ways to solve it, but I'll bet my hat that it's overly complicated and hard to follow, like most of GraphQL once you get past the surface.

I'd love to see a "Pet Store" (I think that's the common example I've seen demo'd in REST/Swagger/GraphQL/etc) example with heavy restrictions based on different users/roles. It's like using the "Todo app" example in a framework: sure, that works and is straightforward, but I want to see how you handle the hard stuff and whether it's still easy.


> with REST I just have a simple endpoint that saves the data to the DB and then sends an email (or puts it in a queue)

With GraphQL you can just do the exact same thing? I do this all the time. I don't understand how you wouldn't be able to do that.
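
A minimal sketch of what that looks like as a mutation resolver (TypeScript; `db`, `emailQueue` and the input shape are hypothetical stand-ins, not any specific library's API):

    const Mutation = {
      createUser: async (
        _root: unknown,
        { input }: { input: { email: string; name: string } },
        ctx: {
          db: { insertUser(u: object): Promise<{ id: string }> };
          emailQueue: { enqueue(job: object): Promise<void> };
        }
      ) => {
        const user = await ctx.db.insertUser(input);                        // save to the DB
        await ctx.emailQueue.enqueue({ type: "welcome", userId: user.id }); // queue the welcome email
        return user;
      },
    };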


Let's say you add a user object to your GraphQL schema. It's only so the viewer can inspect themselves (i.e. the current authenticated user). Maybe this is for a settings page or something.

A while later, suppose someone adds some connection from user to, say, orders. The person who added orders to users was kinda lazy and assumed (somewhat correctly, at that moment anyway) that permissions weren't an issue. So there's no additional permission checking when fetching the orders connection.

Now, suppose 6 months pass. Some other engineer is now implementing reviews. Each review must have an author. What do ya know, there's already a user object available in Graphql. How convenient!

Now every user can inspect all orders of every other user, if that user has left a review.

Mistakes like this are all too easy with GraphQL, and this is the number one reason I would never consider using GraphQL without a query whitelist.


So GraphQL is bad because you didn't implement authorization, which you should have been doing regardless of the API technology you use?


I am just pointing out that it is easy to make mistakes like this, which would be, in this commenter's experience, more obvious with a REST API.

In the equivalent REST API you would probably have to go far, far out of your way to expose users' order information in a reviews API, whereas in GraphQL that is the default.

In a typical REST application, it is enough to ask "does this user have permission to take this action".

In graphql, the question is rather different. It is "does this user have permission to access this data irrespective of the action they are taking", and you have to both ask that question and answer it correctly for everything in your graph.


If you go back to the early stuff coming out of Facebook about GraphQL, it was designed to roll up all the REST services (or similar) into a single request for high latency clients. Occupying what has become known as the backends for frontends (BFF) layer.

In theory, it should be just as obvious either way as your actual services are going to be REST (or similar) either way. I recognize that some people have started using it as a poor man's SQL, but that's not really what it is for.


In the wild, I primarily have seen graphql implemented instead of, or perhaps next to, REST. Not on top of REST.

I'm not sure what you mean about a poor man's SQL. Whether it's backed by micro-services via REST, or just a graphql API in a single app, the value prop for frontend<>backend communication is the same. It's not "using graphql wrong" to not have a micro service architecture.


Using it as a poor man’s SQL was addressed, but using it that way doesn’t mean that’s what it is for.


This can happen pretty easily with ORMs also, which are commonly used together with REST. Not that it really detracts; these risks are quite real.


>Now every user can inspect all orders of every other user, if that user has left a review.

Lmao, that's just bad development, bad testing, and the exact same thing can happen when using REST. "The dev wrote code and forgot to take permissions into account" happens to everyone.

And unlike REST, a properly written schema helps ensure they mostly do the right thing - even without a strict permissions check, it should be obvious to anybody that they can't just write an `orders` resolver that does a `select * from orders`...
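
Something like this sketch, where the connection is scoped to its parent and checked against the viewer (TypeScript-ish, hypothetical names):

    const User = {
      orders: async (
        user: { id: string },
        _args: unknown,
        ctx: { viewerId: string; db: { ordersForUser(id: string): Promise<object[]> } }
      ) => {
        if (ctx.viewerId !== user.id) {
          throw new Error("not authorized to view these orders");
        }
        return ctx.db.ordersForUser(user.id); // scoped fetch, not `select * from orders`
      },
    };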


In practice, there are innumerable paths one can take through a complicated graph, and it is not reasonable or possible to test them all.

The cure is, like you say, writing a proper resolver. This form of permissions error most frequently happens when there is not a dedicated resolver (graphql-ruby, for example, makes it trivial to make a connection without a dedicated resolver).

I don't think this is as easy a mistake to make with a typical REST application. In no normal universe would you return orders data in a reviews API, and the mistake would be much more obvious during development, since you don't have to explicitly select the data you fetch from a REST API (so you are more likely to notice the extra information).

Whereas during development in graphql, the permissions error would be hidden because you probably would not select extra data for no reason.


We also migrated to ClickUp (though not from Jira), but while it's better than what we used before, it still feels... I don't know, sluggish. Loaders everywhere, lists jumping when you scroll. Sometimes I click on a task, the detail modal opens up, and there is a noticeable delay while the data loads. Typing a / in the description, which opens up the command panel or integration panel or whatever it's called, freezes everything for like 100ms or more.


We use Tailscale for this exact use case and it has been working flawlessly so far. You can even set up ACLs as a firewall.
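
The ACL policy file is JSON-ish (HuJSON); a minimal sketch with made-up users and a made-up tag looks roughly like this:

    {
      "groups": {
        "group:family": ["alice@example.com", "bob@example.com"]
      },
      "acls": [
        // family devices may reach the NAS on SMB and SSH only
        {"action": "accept", "src": ["group:family"], "dst": ["tag:nas:445", "tag:nas:22"]}
      ]
    }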


What do you use for freezing food if not plastic containers? Heating and microwaving, okay, I can work around the plastic containers/plates, but for the freezer I have no idea how I'd do that. Especially since the freezer's casing is still plastic.


Glass containers work fine and come in the same shapes as plastic containers do. They are heavier and take a little more space due to being thicker, but it's a small difference.

They do have plastic lids most often, but the lid doesn't have to touch food.


Yup, the IKEA ones are more than good enough. And they're borosilicate, so you can even use them in the oven.

The plastic issue is also why I think people doing sous-vide are insane. You're vacuum-sealing meat into plastic and then giving it a nice long leaching in hot water. Sometimes with acidic foodstuffs!


Cheers, thanks for the info! So as long as the plastic does not touch the food, I'm basically good to go?


Aluminium trays, like that: https://www.emballagefute.com/img/plat-a-four-alu-sertissabl.... Yes, it’s non-reusable, pick your poison :/.


I agree that using upscaled graphics in PCSX2 makes these games more playable on a modern large TV (looking at you, Gran Turismo 4 in 8k60fps), but I think - because the games _were_ designed to be played on old CRT TVs with scanlines - using an older TV makes the games feel much better. I'm planning on making a retro corner with an old TV and older consoles instead of just playing on scaled-up emulators.


>using an older TV makes the games feel much better

Outside of a handful of enthusiasts, the vast majority of consumers are not gonna start bringing CRT TVs back into their homes just to play a few vintage video games.

Especially since most CRT TVs are garbage by today's standards, and the ones that are indeed good are rare, sought after, and command huge premiums on the used market.

Emulators are by far the more sensible solution for most even if they can't replicate the CRT "effect".


What makes a CRT "good" by today's standards?


Insane FPS refresh rates. A different lighting technique that kind of blurred together the pixels in a vague antialias. Less tearing than their potentially cheap TV.


Clarity, brightness, size

