• 1 Post
  • 555 Comments
Joined 2 years ago
Cake day: September 24th, 2023




  • They mean measure first, then optimize.

    This is also bad advice. In fact I would bet money that nobody who says that actually always follows it.

    Really there are two things that can happen:

    1. You are trying to optimise performance. In this case you obviously measure using a profiler, because that’s by far the easiest way to find the slow parts of a program. It’s not the only way though! Profiling only really works for micro-optimisations - you can’t profile your way to architectural improvements. Nicholas Nethercote’s posts about speeding up the Rust compiler are a great example of this.

    2. Writing new code. Almost nobody measures code while they’re writing it. At best you’ll have a CI benchmark (the Rust compiler has this). But while you’re actually writing the code it’s mostly fine just to use your intuition: preallocate vectors, don’t write O(N^2) code, use HashSet where it fits, etc. There are plenty of things that good programmers can be confident are the right way to do it, so you don’t need to constantly second-guess yourself.
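    To make that concrete, here’s a minimal sketch of the kind of intuition-level choices I mean (the function and names are made up for illustration) - no profiler involved, just defaults that obviously aren’t wasteful:

```rust
use std::collections::HashSet;

// Hypothetical example: collect the items that appear in both slices.
fn common_items(a: &[u32], b: &[u32]) -> Vec<u32> {
    // Use a HashSet for O(1) membership tests instead of scanning `b`
    // for every element of `a` (which would be O(N^2) overall).
    let b_set: HashSet<u32> = b.iter().copied().collect();

    // Preallocate the output vector instead of growing it repeatedly;
    // `a.len()` is an upper bound on the result size.
    let mut result = Vec::with_capacity(a.len());
    for &x in a {
        if b_set.contains(&x) {
            result.push(x);
        }
    }
    result
}
```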





  • AI is good at more than just generating stubs, filling in enum fields, etc. I wouldn’t say it’s good at much beyond “boilerplate”, but it’s good at the kind of boilerplate that isn’t difficult yet also isn’t so regular that it can be automated with traditional tools like IDEs.

    Writing tests is a good example. It’s not great at writing tests, but it is definitely better than the average developer when you take the probability of them writing tests in the first place into account.

    Another example would be writing good error context messages (e.g. .with_context() in Rust - there’s a rough sketch of what I mean at the end of this comment). Again, I could write better ones than it does, but like most developers there’s a pretty high chance that I won’t bother at all. You also can’t automate this with an IDE.

    I’m not saying you have to use AI, but if you don’t, you’re pointlessly slowing yourself down. That probably won’t matter to lots of people - I mean, I still see people wasting time searching for symbols instead of just using a proper IDE with go-to-definition.
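    For the context-message point, here’s a rough sketch of the sort of thing I mean (assumes the anyhow crate; the function, file name and wording are made up):

```rust
use anyhow::{Context, Result};
use std::fs;

// Hypothetical helper: read a config file, attaching context to any I/O error
// so the eventual error report says which file failed and why it was being read.
fn read_config(path: &str) -> Result<String> {
    fs::read_to_string(path)
        .with_context(|| format!("failed to read config file {path}"))
}
```

    It’s one line of effort per call site, which is exactly the kind of thing most of us (me included) skip when writing by hand.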


  • Assembly is very simple (at least RISC-V assembly, which is what I mostly work with, is) but also very tedious to read. It doesn’t help that the people who choose the instruction mnemonics have extremely poor taste - e.g. lb, lh, lw, ld instead of load8, load16, load32, load64. Or j instead of jump. Who needs to save characters that much?

    The over-abbreviation is some kind of weird flaw that hardware guys all have. I’ve wondered if it comes from labelling pins on PCB silkscreens (MISO, CLK, etc.)… or maybe they just have bad taste.

    I once worked on a chip that had nested acronyms.



    The evidence is that I have tried writing Python/JavaScript both with and without type hints, and the difference was so stark that there’s really no doubt in my mind.

    You can say “well, I don’t believe you”… in which case I’d encourage you to try it yourself (use a proper IDE, and Pyright rather than Mypy)… But you can equally say “well, I don’t believe you” to scientific studies, so it’s not fundamentally different. There are plenty of scientific studies I don’t believe, and never did (e.g. power poses).



  • then why isn’t it better to write everything in Haskell instead, which has a stronger type system than Rust?

    Because that’s very far from the only difference between Haskell and Rust. It’s the other things that make Haskell a worse choice than Rust most of the time.

    You are right that it’s a spectrum: from dynamically typed, to simple static types (something like Java), to fancy static types (Haskell), then dependent types (Idris), and finally full-on formal verification (Lean). And I agree that at some point it can become not worth the effort. But that point is pretty clearly after any mainstream statically typed language (Rust, Go, Typescript, Dart, Swift, Python, etc.).

    In those languages, any time you spend adding static types is easily paid back in time not spent writing tests, debugging, writing docs, searching code, and screwing up refactorings. Static types in these languages are a time saver overall.
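    As a toy illustration of the refactoring point (the types below are made up, not from any real codebase): add a variant to an enum and the compiler points at every match that now needs updating, instead of you finding out from a failing test or a bug report.

```rust
// Made-up example: exhaustive matching as refactoring insurance.
enum PaymentMethod {
    Card { last4: String },
    BankTransfer,
    // Adding a new variant here (say `Crypto { wallet: String }`) turns every
    // non-exhaustive `match` below into a compile error until it's handled.
}

fn describe(method: &PaymentMethod) -> String {
    match method {
        PaymentMethod::Card { last4 } => format!("card ending in {last4}"),
        PaymentMethod::BankTransfer => "bank transfer".to_string(),
    }
}
```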



  • No, I disagree. There are some things for which it’s really infeasible to use the scientific method. You simply can’t do an experiment for everything.

    A good example is UBI. You can’t do a proper experiment for it because that would involve finding two similar countries and making one use UBI for at least 100 years. Totally impossible.

    But that doesn’t mean you just give up and say “well then we can’t know anything at all about it”.

    Or, closer to programming: are comments a good idea, or should programming languages not support comments? Pretty obvious answer, right? Where’s the scientific study?

    Was fallthrough-by-default in switch statements a mistake? Obviously yes (see the sketch at the end of this comment). Did anyone ever do a study on it? No.

    You don’t always need a scientific study to know something with reasonable certainty, and often you couldn’t do one anyway.

    That said, I did see one really good study showing that Typescript catches about 15% of JavaScript bugs. So we don’t have nothing.
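    (The sketch I mentioned above - a made-up Rust example of the fix modern languages settled on: no implicit fallthrough, and sharing a body between cases is explicit.)

```rust
// Made-up example: in C, forgetting `break` silently runs one case into the
// next. Rust's `match` has no implicit fallthrough; arms are independent and
// grouping cases is explicit via `|` patterns or ranges.
fn http_class(status: u16) -> &'static str {
    match status {
        200 | 201 | 204 => "success",   // explicit grouping, no `break` needed
        301 | 302 | 307 => "redirect",
        400..=499 => "client error",
        _ => "other",
    }
}
```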




  • I think we still must call this an “open question”.

    Not sure I agree. I do think you’re right that it’s hard to prove these things - partly because it’s fundamentally hard to prove anything involving people, and partly because most of the advantages of static types are irrelevant for the tiny programs that many studies use.

    But I don’t think that means you can’t use your judgement and come to a conclusion. Especially with languages like Python and Typescript that allow an “any” cop-out, it’s hard to see how anyone could really conclude that static types aren’t better.

    Here’s another example I came across recently: should bitwise & have lower precedence than == like it does in C? Experience has told us that the answer is definitely no, and virtually every modern language puts them the other way around. Is it an open question? No. Did anyone prove this? Also no.
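    A tiny illustration in Rust (the flag name is made up) of why the modern ordering is the sane one:

```rust
const WRITABLE: u32 = 0b0000_0010;

fn is_read_only(flags: u32) -> bool {
    // In Rust (and most modern languages) `&` binds tighter than `==`, so this
    // parses as `(flags & WRITABLE) == 0`, which is what you almost always mean.
    // With C's precedence the same expression parses as `flags & (WRITABLE == 0)`,
    // i.e. `flags & 0` - always zero, the classic silent bug.
    flags & WRITABLE == 0
}
```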


  • Just being open source doesn’t guarantee a project’s survival. If Google were to abandon it, the most likely outcome would be a community fork with a hundredth of the development manpower it gets now, and most developers would abandon the platform, leading to its effective death.

    But I also think it’s unlikely Google will abandon it. It’s actually quite good and quite popular now.


  • Definitely a high usefulness-to-complexity ratio. But IMO the core advantage of Make is that most people already know it and have it installed (except on Windows).

    By the time you need something complex enough that Make can’t handle it (e.g. if you get into recursive Make), you’re better off using something like Bazel or Buck2, which also solve a bunch of other build-system problems (missing dependencies, early cut-off, remote builds, etc.).

    However, this does sound very useful for wrangling lots of other broken build systems - I can totally see why Buildroot are looking at it.

    I recently tried to create a basic Linux system from scratch (OpenSBI + Linux + Busybox + …), which is basically what Buildroot does, and it’s a right pain: there are dependencies between the different build systems, and some of them don’t actually rebuild properly when their dependencies change (cough OpenSBI)… This feels like it could cajole them into something that actually works.


