

Is he signed up to be an organ donor?
Me? Reading the error message that tells me there's a drop-in replacement for the deprecated function? Why, I'd never!
Maybe the onus should be on LLM developers to filter out trash like this from their training datasets
At any rate, it’s extremely unhelpful to not include a version number at the very very least
On average, disposable plastic bottles shed microplastics much more prolifically than plastic water piping.
.loc and .iloc queries are a fun syntax adventure every time
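(For the uninitiated, a quick sketch of why, using a made-up toy DataFrame; the column names and values below are just for illustration. .loc goes by label and its slices include the endpoint, while .iloc goes by integer position and its slices exclude it.)

```python
import pandas as pd

# Toy DataFrame with string labels as the index, purely for illustration.
df = pd.DataFrame(
    {"a": [10, 20, 30, 40], "b": [1.0, 2.0, 3.0, 4.0]},
    index=["w", "x", "y", "z"],
)

# .loc is label-based, and label slices INCLUDE the endpoint.
print(df.loc["w":"y", "a"])    # rows w, x, and y

# .iloc is position-based, and position slices EXCLUDE the endpoint.
print(df.iloc[0:3, 0])         # positions 0, 1, 2 (also w, x, y here)

# Boolean masks go through .loc as well.
print(df.loc[df["b"] > 2.0, ["a", "b"]])    # rows y and z
```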
It’s the same “I’ll respect you if you respect me” dynamic in an imbalanced-power system.
I’m screwing you over if I personally feel bad for what I’m doing to you (never happens, therefore I’m always fair). You’re screwing me over if you inconvenience me.
I guess you could consider someone who is staunchly whitehat with no exceptions to have a creed/code, where they consider the rules transcendent of any specific situation (e.g. nazi websites).
That’s generally true under the paradigm of profit maximization, unless you reach some sort of insane tech breakthrough, which DeepSeek seems to have accomplished.
Oh my bad. According to another commenter it is sandboxed though
They have Google services, but through a third-party wrapper called microG, which keeps them sandboxed enough that you can stop them from doing system-level actions like this.
edit: not microG, as evidenced by the strikethrough I put in very soon after receiving the first of several replies clarifying the situation. I would encourage you to read one of them before adding your own. <3
Well, no.
In scenario A they are instantly vaporized. In scenario B they are brutally sliced into multiple pieces and crushed to death, rather painfully depending on the speed of the trolley.
You are on track A and the bomb is within sight. If you get the shit end of the 50/50, everyone in the diagram gets vaporized instantly.
The cherry on top is Trump rescinding the CHIPS act which was supposed to bring more silicon electronics manufacturing to the US, meant to make our microchip supply more resilient. Not only is it cutting off one’s nose to spite one’s face, it’s also shooting ourselves in the foot (and then blaming Woke when the obvious consequences come knocking home).
You might not even need them, it’s more just an option if the unit wiggles around too much for your liking after dropping it in.
Definitely don’t use screws or nails! Glue should be fine but you might need a method to hold them in place while it dries. Masking tape would probably do the job just fine; in fact, you could probably skip the glue and just tape the blocks into the gap from underneath the countertop. The latter case would definitely require careful inspection of the cooktop to make sure you’re not covering up anything important.
The idea with the blocks/shims isn’t to hold up the cooktop structurally, just to keep it from sliding side to side (and only if you need them, it might be perfectly fine without). The weight should still be primarily on the granite itself in all cases.
Disclaimer: I never installed cooktops but did have to work with the spec sheets frequently at a previous job that included kitchen design work.
It’s a bit dependent on the specific model, and on whether any load-bearing parts of the design would normally rest on the extra 2 cm of countertop that the new one expects. You should be able to determine that by looking underneath for anything of the sort.
I’d say in most cases you should be fine, as long as the flange covers the whole cutout, which it should in this case. If the new one feels loose, you could glue in some 1cm wood shims on either side just to keep it from sliding around.
Financial stress, no, but there can absolutely be other stressors that come with such a job or make it otherwise difficult, especially if one cares about the quality of their work and the lives of their patients. Doubly so if insurance companies seem to be actively hampering those efforts and make one feel more like a cog in a profit machine than an expert in one’s field.
Just ask them to answer your question in the style of a know-it-all Redditor because you need the dialog for a compelling narrative or something
People developing local models generally have to know what they’re doing on some level, and I’d hope they understand what their model is and isn’t appropriate for by the time they have it up and running.
Don’t get me wrong, I think LLMs can be useful in some scenarios, and can be a worthwhile jumping off point for someone who doesn’t know where to start. My concern is with the cultural issues and expectations/hype surrounding “AI”. With how the tech is marketed, it’s pretty clear that the end goal is for someone to use the product as a virtual assistant endpoint for as much information (and interaction) as it’s possible to shoehorn through.
Addendum: local models can help with this issue, as they’re on one’s own hardware, but they still need to be deployed and used with reasonable expectations: that they are fallible aggregation tools, not to be taken as authorities in any way, shape, or form.
On the whole, maybe LLMs do make these subjects more accessible in a way that’s a net-positive, but there are a lot of monied interests that make positive, transparent design choices unlikely. The companies that create and tweak these generalized models want to make a return in the long run. Consequently, they have deliberately made their products speak in authoritative, neutral tones to make them seem more correct, unbiased and trustworthy to people.
The problem is that LLMs ‘hallucinate’ details as an unavoidable consequence of their design. People can tell untruths as well, but if a person lies or misspeaks about a scientific study, they can be called out on it. An LLM cannot be held accountable in the same way, as it’s essentially a complex statistical prediction algorithm. Non-savvy users can easily be fed misinfo straight from the tap, and bad actors can easily generate correct-sounding misinformation to deliberately try and sway others.
ChatGPT completely fabricating authors, titles, and even (fake) links to studies is a known problem. Far too often, unsuspecting users take its output at face value and believe it to be correct because it sounds correct. This is bad, and part of the issue is marketing these models as though they’re intelligent. They’re very good at generating plausible responses, but this should never be construed as them being good at generating correct ones.
Well if they die then they’re just suckers and losers like all the dead military servicepeople