Here’s one I found a long time ago and keep thinking of randomly. jynx on PerlMonks explains what makes and breaks an Obfuscated Code entry. Does anyone still write Perl? Does anyone still write (intentionally) obfuscated Perl? Either way, I really like the way he offers examples and counter-examples of each principle.
2) pack/unpack is not obfuscation
The reason i list the counter-example is because it is not unpacking anything like what you think at first glance. While the obfu itself does need some work, that is an acceptable use of unpack. On the other hand, looking at the example we see a fairly common use of unpack: get the string and unpack it, oh look the string is japh. While a packed string is line noise, it’s easy to see past it and note what the code is doing if it’s a simple obfu.
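The pattern jynx calls out — pack a string as a list of character codes, unpack it, and out pops japh — has a rough analogue in Python's struct module. This is just an illustration of why it's weak obfuscation, not Perl:

```python
import struct

# Pack four character codes into a byte string, then decode them back.
# The "line noise" is easy to see past once you spot the unpack step.
data = struct.pack("4B", 0x6A, 0x61, 0x70, 0x68)
print(data.decode("ascii"))  # japh
```

One glance at the hex values (or one run of the script) reveals the string, which is exactly why jynx says a simple pack/unpack doesn't count as obfuscation.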
Since I have a tag dedicated to version control, I thought I’d use it to link to iolaus, a git porcelain that emulates darcs. It’s written by the same guy who wrote darcs, possibly as a one-off project. Perhaps it was meant to solve darcs’s "exponential merge time" problem. That problem was drastically improved in Darcs 2, and iolaus hasn’t been updated since March 2010, so it probably isn’t worth worrying about much.
I realized that the semantics of git are actually not nearly so far from those of darcs as I had previously thought. In particular, if we view each commit as describing a patch in its "primitive context" (to use darcs-speak), then there is basically a one-to-one mapping from darcs’ semantics to a git repository. The catch is that it must be a git repository with multiple heads!
Fortunately, this is not such a foreign concept to git. In fact, git has a whole framework to help users manage repositories with multiple heads (see, e.g., checkout and branch). There are just a couple of major differences in how git works. First, in git your working directory will only reflect one of the heads, while in darcs (or iolaus) the working directory reflects the union of all changes in the repository.
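As a quick sketch of the multiple-heads idea (assuming git is installed; the repository and branch names here are made up):

```shell
# Create a repository with two heads sharing one base commit.
git init -q demo && cd demo
git -c user.name=u -c user.email=u@e commit -q --allow-empty -m "base"
git branch feature                  # a second head, initially at "base"
git checkout -q feature
git -c user.name=u -c user.email=u@e commit -q --allow-empty -m "feature work"
# The working directory now reflects only the "feature" head;
# the other head still points at "base".
git branch --list
```

This is git's version of the difference described above: checking out one head hides the others, whereas darcs would show you the union of everything in the repository.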
Just to make things even more interesting, it’s written in Go.
Seen on LWN, an article about porting an application from C# to Java. Punchline: automated translation. Quote:
The inspiration for this was an article about Boeing and automatic conversion. Well we thought "if Boeing can do it so can we". Sounds stupid? Well it is. Luckily for us we did not think that at the time.
Cute nerd joke: Like, Python.
#!usr/bin/python
# My first Like, Python script!
yo just print like "hello world" bro
I’ve been spending a bit of time steeping myself in EmacsLisp these last few days. I’ve been looking for information on elisp "best practices" — specifically, is it OK to rely on (require 'cl)?
Here’s one page wondering the same thing. There’s always a ton of interesting stuff whenever you go poking at emacs packages; most surprising to me this time around was ELPA, the Emacs Lisp Package Archive. Perl has CPAN, Python has PyPI, Ruby has RubyGems, and now Emacs has ELPA.
I also found the blog emacs-fu pretty interesting; approximately one post a week, I think. Lots of stuff I wish I could absorb better.
Emacs Coding Conventions from the Elisp manual is also pretty helpful. To this point (about CL), it says:
Please don’t require the cl package of Common Lisp extensions at run time. Use of this package is optional, and it is not part of the standard Emacs namespace. If your package loads cl at run time, that could cause name clashes for users who don’t use that package.
However, there is no problem with using the cl package at compile time, with (eval-when-compile (require 'cl)). That’s sufficient for using the macros in the cl package, because the compiler expands them before generating the byte-code.
For me, this is enough, because I want to use dolist. But there are programmers out there like David O’Toole, who writes in his interactive guide to the GNU Emacs CL package:
Despite what people say about still being able to use the macros while complying with the policy, in my opinion the policy is still a discouragement. You have to memorize which of its features you must abstain from using (and therefore lose the benefit of those features) if you are to have any hope of someday contributing Lisp code to GNU Emacs.
I think the GNU Emacs maintainers are hesitant to allow use of a package, like cl, which isn’t "namespaced". I bet if all the functions in cl were prefixed with cl-, nobody would mind…
[Update, 2010-Apr-27: From an email on the magit email list:
There's also the small matter that many of the function implementations in cl, striving for the full generality of Common Lisp (much of which is completely useless in Emacs), turn out to be horrible.
E.g., for a fun time, dig down through
(find-if pred list :from-end t),
and look at what it ACTUALLY does when you finish macroexpanding everything. It tests every element of the list against the predicate, not just the rightmost ones stopping when it finds the first match. Once it determines the rightmost match, it then retains NOT the element itself, but its ordinal position N, which then gets used in (elt list N), meaning ANOTHER listwalk, just to get the element back in order to return it. Nor is the byte-compiler anywhere near smart enough to optimize this away (I'm not sure any compiler would be...)
I'll grant cl has some useful macros in it, but it comes bundled with a lot of crap and you need to be really careful about what you use. For many things, you're better off rolling your own functionality using the standard routines available (e.g., while, mapcar, and reverse are all written directly in C).
And you most definitely do NOT want to be foisting the crap on everybody else, hence the need to keep it out of the runtime.]
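The double list-walk described in that email can be sketched in Python (a loose translation for illustration, not the actual macroexpansion): the naive version mirrors what the expanded cl code reportedly does, while the obvious alternative scans from the end and stops at the first match.

```python
def find_if_from_end_naive(pred, lst):
    # What the macroexpansion is said to do: test EVERY element,
    # remembering only the ordinal position n of the rightmost match...
    n = None
    for i, x in enumerate(lst):
        if pred(x):
            n = i
    if n is None:
        return None
    # ...then fetch the element back with (elt list n), which for a
    # Lisp list means walking the list a second time.
    return lst[n]

def find_if_from_end(pred, lst):
    # The obvious alternative: scan from the end, stop at the first match.
    for x in reversed(lst):
        if pred(x):
            return x
    return None

is_even = lambda x: x % 2 == 0
print(find_if_from_end_naive(is_even, [1, 4, 3, 8, 5]))  # 8
print(find_if_from_end(is_even, [1, 4, 3, 8, 5]))        # 8
```

(Python lists support cheap backwards iteration, which singly-linked Lisp lists don't, so a real fix in Lisp would look different; the point is just the wasted second walk and the needless full scan.)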
// be careful with those implicit .toString() calls in == comparison
typeof "abc" == "string"               // true
typeof String("abc") == "string"       // true
String("abc") == "abc"                 // true -- same types get casted to equal each other
String("abc") instanceof String        // false -- hmmm...
(new String("abc")) instanceof String  // true
String("abc") == (new String("abc"))   // true -- wait, wtf?
If you find yourself trying to do a polynomial regression in R, you may find Polynomial Regression in R by Bret Larget extremely helpful. I always have a hard time remembering the I(x^2) syntax. The explanations of the underlying statistics are also useful if you already know a little bit of what’s going on.
While r2 has this nice interpretation, its major deficiency is that it will always increase as you add additional variables — the residual sum of squares from a small model must be at least as large as that from a larger model of which it is a special case. So, looking at r2 is not a good strategy for picking out a good model, because you can get increasingly better r2 values by adding spurious variables. One attempt to correct for this is to compute the adjusted r2 statistic.
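R's lm(y ~ x + I(x^2)) has a rough Python analogue in numpy.polyfit; here's a sketch on made-up noiseless data, with r2 and the standard adjusted-r2 formula computed by hand for illustration:

```python
import numpy as np

# Fit a quadratic to noiseless data generated from y = 2x^2 + 3x + 1.
x = np.linspace(-5, 5, 50)
y = 2 * x**2 + 3 * x + 1
coeffs = np.polyfit(x, y, deg=2)          # highest-degree term first
y_hat = np.polyval(coeffs, x)

# r2 and adjusted r2 (n observations, p predictors):
n, p = len(x), 2
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
r2_adj = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(np.round(coeffs, 6))
```

With noiseless data both statistics come out at 1; on real data the (n - 1)/(n - p - 1) factor is what penalizes r2_adj for each spurious variable you add.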
A good friend of mine has created a personality test, both to tell you what kind of developer you are when it comes to designing software and to gather data for his PhD. If you have the 10 minutes it takes, please do take it; it turns out HR departments and secretaries don’t like letting PhD students ask managers to give a short online test to their developers.
I’m an improviser (all results are here).
An article on Lambda the Ultimate about literate programming. I didn’t read all of the papers — only "Programming on a Team Project" — and I’ve never really done any literate programming myself, but it’s an interesting methodology and I sometimes wish it had caught on better.
In my experience, healthy projects either have very little in the way of comments beyond architectural descriptions, or have lots and lots of them, sometimes one per line of code (mature projects tend toward this end). I think XP is probably right to suggest that energy spent commenting is better spent refactoring or otherwise improving the codebase.
And yet LP still has a compelling power (at least for me)! My feeling is that there are some applications which benefit a lot from a literate style — namely research papers, data analyses, and tutorials. But for everything else I think it’s probably better relegated to the museum of history.