You got me in the first 3 quarters, not gonna lie!
There are cases where instead of `origin/master..HEAD` you may want to use `@{upstream}..HEAD`, to compare with the upstream of your current branch. Unfortunately it’s not widely known.
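For example, to list the commits you have that the upstream does not, whatever that upstream is:

```sh
git log @{upstream}..HEAD
git log @{u}..HEAD   # @{u} is the accepted shorthand
```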
I absolutely agree that method extraction can be abused. One should not forget that locality is important. Functional idioms do help minimise the layers of intermediate functions. Lambdas/closures help too, by keeping the function much closer to its use site. And local variables can sometimes be a better choice than a function that returns just an expression.
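A minimal Python sketch of those last two points (the names are made up):

```python
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    age: int

people = [Person("ada", 36), Person("bob", 12)]

# A single-use predicate can live right at its use site as a lambda,
# instead of becoming one more tiny top-level function to jump to:
adults = list(filter(lambda p: p.age >= 18, people))

# And a named local can replace a helper that returns just an expression:
minimum_age = 18
adults = [p for p in people if p.age >= minimum_age]
```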
Good advice, clear, simple and to the point.
Stated otherwise: “whenever you need to add a comment to an expression, try instead a named intermediate variable, method, or free function”.
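A quick Python illustration (the names are hypothetical):

```python
from datetime import datetime, timedelta

now = datetime(2024, 1, 31)
last_login = datetime(2024, 1, 20)

# Before: the expression needs a comment.
# A user is active if they logged in within the last 30 days.
is_active = (now - last_login) <= timedelta(days=30)

# After: the intermediate names carry the explanation.
inactivity_limit = timedelta(days=30)
time_since_login = now - last_login
is_active = time_since_login <= inactivity_limit
```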
I never understood why python won against ruby. I find ruby an even better executable-pseudocode language than python.
Read your own code that you wrote a month ago. For every wtf moment, try to rewrite it in a clearer way. With time you will internalize what is or is not a good idea. Usually this means naming your constants, moving code into a function so that it gets a friendly name explaining what it does, or moving code out of a function because the abstraction you chose was not a good one. Since you have 10 years of experience it’s highly possible that you already do all this, so just continue :)
If you are motivated I would advise taking a look at Rust. The goal is not really to be able to use it (even if it’s nice to be able to write fast code to speed up your python), but the Rust compiler is like a very demanding teacher that will not forgive any mistake, while explaining why it’s not a good idea and what you should do instead. The quality of the errors is crucial; it is what will help you understand and improve over time. So consider Rust an exercise to become a better python programmer: whatever you try to do in Rust, try to understand how it applies to python. There are many tutorials online, and the official book is a good start. In general, learning new languages with a very different paradigm is the best way to improve, since it helps you see things from a new angle.
I would have liked a link to the LKML thread. Usually they are quite informative.
I use a 42-key layout modified from bépo (a French Dvorak-inspired layout) with the AltGr layer of ergol. Go check this AltGr layer, it’s awesome for programming, and there are versions compatible with qwerty and lafayette.
╭╌╌╌╌╌┰─────┬─────┬─────┬─────┬─────┰─────┬─────┬─────┬─────┬─────┰╌╌╌╌╌┬╌╌╌╌╌╮
┆     ┃ ¹   │ ²   │ ³   │ ⁴   │ ⁵   ┃ ⁶   │ ⁷   │ ⁸   │ ⁹   │ ⁰   ┃     ┆     ┆
┆     ┃ ₁   │ ₂   │ ₃   │ ₄   │ ₅   ┃ ₆   │ ₇   │ ₈   │ ₉   │ ₀   ┃     ┆     ┆
╰╌╌╌╌╌╂─────┼─────┼─────┼─────┼─────╂─────┼─────┼─────┼─────┼─────╂╌╌╌╌╌┼╌╌╌╌╌┤
·     ┃     │ ≤   │ ≥   │ *¤  │ ‰   ┃ *^  │     │ ×   │ *´  │ *`  ┃     ┆     ┆
·     ┃ @   │ <   │ >   │ $   │ %   ┃ ^   │ &   │ *   │ '   │ `   ┃     ┆     ┆
·     ┠─────┼─────┼─────┼─────┼─────╂─────┼─────┼─────┼─────┼─────╂╌╌╌╌╌┼╌╌╌╌╌┤
·     ┃     │ ⁽   │ ⁾   │     │ ≠   ┃ */  │ ±   │ —   │ ÷   │ *¨  ┃     ┆     ┆
·     ┃ {   │ (   │ )   │ }   │ =   ┃ \   │ +   │ -   │ /   │ "   ┃     ┆     ┆
╭╌╌╌╌╌╂─────┼─────┼─────┼─────┼─────╂─────┼─────┼─────┼─────┼─────╂╌╌╌╌╌┴╌╌╌╌╌╯
┆     ┃ *~  │     │     │ –   │     ┃ ¦   │ ¬   │ *¸  │     │     ┃     ·
┆     ┃ ~   │ [   │ ]   │ _   │ #   ┃ |   │ !   │ ;   │ :   │ ?   ┃     ·
╰╌╌╌╌╌┸─────┴─────┴─────┴─────┴─────┸─────┴─────┴─────┴─────┴─────┚ · · · · · ·
If you have references explaining why and how it’s easier to port C to a new architecture by writing a new compiler from scratch than by either writing a backend for llvm (and soon gcc) or writing a minimal wasm executor for that architecture (like what zig is doing), I’m interested. And of course I’m talking about new architectures, because it’s much easier to recreate something that has already been done before.
I’m not familiar with C tooling, but I have done multiple projects in C++ (in a professional environment) and AFAIK the tooling is the same. Tooling for C++ is a nightmare, and that’s an understatement. Most of the difficulty is self-inflicted: not using cmake/meson but a custom build system, relying on system libraries instead of using Conan or vcpkg, not using smart pointers, … but adding basically anything (LSP, code coverage, a new dependency, clang-format, clang-tidy, …) is horrible in those environments. And if you compare the quality of those tools to the ones of other languages, they are not even close. For example, compare the lints given by clang-tidy to those of Rust’s clippy.
If it took no more than an hour to add any of those tools to a legacy C project, then yes, it would be disingenuous not to compare C + tooling with Rust, but unfortunately that’s not the case.
With Bram Moolenaar’s death, I sincerely think that vim will no longer be able to play catch-up with nvim. Bram Moolenaar did an amazing job with vim, but with his death I think vim is going to become an editor of the past, just like vi is an editor of the past. And nvim is its successor, since that’s where the developers have moved.
I never had to use this estimate in front of a client, but if I had, I would decompose it before giving the total. If there are about 10 items to do per button, then 10 buttons make a hundred complex tasks. Let’s say each task takes an hour, but since we are fast we can do 10 a day. Suddenly 10 working days, in other words 2 weeks, doesn’t seem unrealistic for this apparently simple 10-button job.
As a rough estimate, if you include everything (appearance, discussion, functionality, interaction with other controls, …) I would say that every single input field or button is about a day of work. Then you start to realise how many buttons there are in any GUI, and how much it will cost.
Usually when people say “I suck at maths”, it means they are bad at doing calculations by hand. Maths is extremely useful in programming, but it’s absolutely not the same kind of math. I don’t think the grades you got in math at school influence in any way whether you will be good or bad at programming.
Their take on what they call capabilities is very interesting. Basically anything that would make a function non-pure seems to have to be declared explicitly.
A computational effect or an “effectful” computation is one which relies on or changes elements that are outside of its immediate environment. Some examples of effectful actions that a function might take are:
- writing to a database
- throwing an exception
- making a network call
- getting a random number
- altering a global variable
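As a rough Python analogy of the idea (the language discussed has first-class support for this; here the capability is just an explicit parameter, and all names are illustrative):

```python
import random

def roll_dice(rng: random.Random) -> int:
    # The "get a random number" effect is visible in the signature:
    # the function can only be as random as the capability handed to it.
    return rng.randint(1, 6)

print(roll_dice(random.Random(42)))      # deterministic, handy for tests
print(roll_dice(random.SystemRandom()))  # OS-backed randomness in production
```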
Interesting take, and I think you are right: it’s indeed critical to know how your product is used nowadays.
2019, so 4-5 years ago: not that recent, but not ancient either. But unfortunately the tutorials have not been updated.
I would say that the biggest benefit of `git switch` is that you can’t switch to a detached state without using a flag (`--detach` or `-d`). If you do `git co $tag` or `git co $sha-1`, you may at some point get the message “you are in a detached state”, which is incomprehensible for beginners. To end up in the same state with `git switch` you must explicitly type `git switch --detach $tag/$sha-1`, which makes it much easier to understand, and to remember that you are about to do something unusual.
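Concretely, with a hypothetical tag v1.0:

```sh
git checkout v1.0         # silently puts you in detached HEAD state
git switch v1.0           # refuses: a branch was expected, not a tag
git switch --detach v1.0  # explicit opt-in to the detached state
```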
More generally, it’s harder to misuse `git switch`/`git restore`. And it’s easier to explain them, since they only do one thing (unlike `git checkout`, which is a mess!).
So if it’s only for you, `git checkout` is fine, but I would still advise using `git switch` and `git restore`, so you will have an easier time teaching/helping beginners.
If you try to learn git one command at a time, on the fly, git is HARD. If you take the time to understand its internal data structure, it’s much, much easier to learn. Unfortunately most people try the former, because it works well (or better) for most tasks.

I can’t recommend the git parable enough.
I am always doubtful when people say that accessing information inside git is hard. I totally agree that the defaults in git can be improved (and they are: `git restore` and `git switch` are a much better alternative to `git checkout`, which I no longer use). So let’s review the section “A Few Reasons Why SQLite Does Not Use Git”:
“Git does not provide good situational awareness”
`git log --graph --oneline --author-date-order --since=1week`
Make it an alias if you use it often. Aliases are what help you create your own good defaults (until everyone uses the same alias, at which point it should become part of the base set of commands).
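For example (the alias name is just an illustration):

```sh
git config --global alias.overview \
    'log --graph --oneline --author-date-order --since=1week'
git overview   # your own good default, one word away
```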
“Git makes it difficult to find successors (descendants) of a check-in”
`git log --graph --oneline --all --ancestry-path ${commit}~..`
Likewise, you could consider making it an alias if you use it often. Aliases can also be used as post-its, to help you remember the commands that you find useful but only use once in a blue moon!
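And when even the alias name escapes you, the list is easy to skim:

```sh
# List every alias defined in your git config:
git config --get-regexp '^alias\.'
```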
“The mental model for Git is needlessly complex”

I may agree with that one. For reference, this is what the article says:

A user of Git needs to keep all of the following in mind:
- The working directory
- The “index” or staging area
- The local head
- The local copy of the remote head
- The actual remote head
If `git fetch` was run automatically every so often, as well as `git push` (of course to a personal branch), then this model could be simplified to just the working directory, the index, and the head. And integrating your changes (merging/rebasing) should probably be done exclusively through a PR-like mechanism.
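Incidentally, recent git can already take care of the automatic-fetch half of that; a minimal sketch:

```sh
# Schedule background maintenance for the current repository; the
# default schedule includes an hourly "prefetch" of the remotes.
git maintenance start
```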
“Git does not track historical branch names”

I’m skeptical about the usefulness of this. But since git was my first real vcs (10 years ago), it may just be that I have never used a workflow that took advantage of persistent branches. I assume that `git annotate` could be a solution here.
“Git requires more administrative support”

“most developers use a third-party service such as GitHub or GitLab, and thus introduce additional dependencies.”

That’s absolutely true, but I’m not sure it’s a real issue. Given how many strategies there are for CI/CD (and none is the definitive winner yet), I do think that being able to select the right option for you/your team/your org is probably a good idea.
“Git provides a poor user experience”

I highly disagree about that xkcd comic. Git is compatible with all workflows, so everyone uses only a subset of its commands; of course a tool that supports every workflow will include commands you never use. You need about 15 commands to do stuff, 30 to be fluent, and some more to be able to help anyone. Compared to any other complex software that I use, I really don’t think that’s an unreasonably high count.

That being said, I totally agree that the git of 10+ years ago was more complex, and we should correctly teach juniors what is needed. HTML/css/js is a nightmare of complexity, but that doesn’t stop 15-year-old kids with no mentoring from building cool stuff, because you don’t need to know everything to do most of the things you may think of, just a good minimal set of tools. And people should definitely take the time to learn git, and stop using outdated guides. Any guide that doesn’t use `git switch`, `git restore` and `git rebase --interactive`, and doesn’t show how to inspect the history in depth (`git log --graph` or any graphical interface that shows the history as a graph, `git show`, and more generally how to filter the history any way you want: by author, date, folder, file type, …) is definitely not a good guide.
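A few examples of that kind of filtering (names and paths are illustrative):

```sh
git log --graph --oneline --all         # the whole history as a graph
git log --author=alice --since=2.weeks  # by author and date
git log -- src/ '*.md'                  # only commits touching those paths
git show HEAD~3                         # inspect a single commit in detail
```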
To sum up, from this presentation fossil seems more opinionated than git, which means it will be simpler as long as your workflow exactly matches the expected one, whereas git requires you to curate its list of commands and keep only the ones useful for your workflow.
DRY and YAGNI are awesome iff you also practice YNIRN (You Need It Right Now)! Otherwise you just get boilerplate or spaghetti.