Quote: Original post by MaulingMonkey
With C, you at least have a legitimate complaint in the symbols having multiple purposes (* being dereference, pointer declaration, and multiplication; & being address-of, reference declaration, bitwise AND, and part of logical AND), to the point that keeping them straight requires slightly more than casual knowledge of C
I used C daily for a few years, and more recently C++, and I still double check when I see:
int *a[3]; (int (*a)[3] or int *(a[3])?)
*p++; ((*p)++ or *(p++)?)
and have to think a second about, say:
int *&r (inside to outside, so reference to pointer)
I even sometimes do a double take when I see:
Buffalo(int buffalo) : buffalo(buffalo) {}
Anyway, I'd object to @ for types because it doesn't tell me anything useful. This is also why I don't like the convention of starting all class names in C++ with the letter 'C'. I think a good middle-ground is enforced naming conventions like "types always start with a capital letter". It gives the sort of visual reference that @ would without adding to the noise.
On the other hand, Ruby's use of @ tells me useful information that I can't learn from the immediate context: the scope of a variable. It's like the m_, g_, etc. prefixes that are somewhat popular in C++.
Like you say, languages need syntax, but that doesn't imply that all syntax in a language is needed or even useful.
Quote: Original post by Talroth
Other than people not lining up their braces on the same column, or failing to indent properly
I don't line up my braces because I don't like giving the opening brace its own line. I prefer to let the whitespace do the talking and tuck that brace away someplace unobtrusive. The closing brace gets its own line, but that's because it's more intimately connected with the statement that started the block than with the last statement in the block.