Saturday, December 25, 2010

Infinite Recursion in Parser Generators

Well, I've stuck my foot in it again and am doing something which doesn't make a lot of sense.

Skipping the details, I decided I need to write a parser in PHP and the language I'm designing is embedded in PHP - which has a complex syntax and . . . anyway, one thing led to another and I've ended up writing kind of a parser generator in PHP.

It's not really a parser generator - it's more like a programmable parser where the program is a grammar specification.

So, I broke out the Dragon book and started reading, built a programmable recursive descent parser framework object, a hand-coded parser for language grammars so I can program it, and a programmable lexical scanner - all in PHP, and it all works pretty well.

And then . . .

I couldn't solve my problem with it.

Why, you ask?

Well, the problem I have cannot be solved by a parse tree created from a right-recursive grammar - which is what the book says a recursive descent parser needs to process.

Why?

Because when a recursive descent parser hits a left recursive production (which is what I need for my problem) it goes into an infinitely deep recursion.

Why does it do that?

It's stupid.

It turns out that not only will simple productions like a : a TERMINAL ; create infinite recursions, but various well-hidden mutual recursions will as well.

So - having faith in the Book - I decided maybe I need something which handles left recursive grammars. So I read and read and thought and thought and - as usually happens - I got tired, went to bed, and woke up this morning with a realization:

"It's not the recursion dummy, it's because processing non-terminals don't eat tokens!!!!!"

If that doesn't mean much to you - that's OK. The rest of this post is a boring explanation of what's happening and how to fix it.

First of all - why isn't it obvious from the book? Because it's not in there. Here's why:
  1. The book defines a mathematical formalism to describe language structure and parsing
  2. Like good mathematicians, they then ignore the actual problem and get buried in the formalism. And then . . .
  3. They come up with ways to solve problems in the formalism using the programming techniques and computer constraints of the time they were working in
  4. The 2nd, 3rd, etc. generations of 'students' become teachers, and so they just teach the formalism in the computing context of the original work
My Dragon book is copyright 1977. Torben Ægidius Mogensen's "Basics of Compiler Design" is copyright 2000 through 2010 [nicely written, by the way], and its syntax analysis is a rehash of the stuff in the Dragon book [to be fair, I didn't read it all, but this is true to within the margin of error inherent in a quick skim].

Believe it or not, things have changed.

The Apple II had barely appeared in 1977 (it shipped that June; I got mine in 1979 or 1980) and it maxed out at a whopping 64 kilobytes of RAM [a kilobyte is 1,024 bytes]. The processor executed about one instruction every couple of microseconds. In other words, both memory and speed were very, very limited, so a lot of work went into algorithm design - at the expense of clarity and simplicity of code.

As a result, the compiler writers of the day avoided recursion ["function calls are expensive and take a lot of RAM"] in favor of memory- and speed-efficient algorithms, and the compiler generator section of the Dragon book is heavy into table-driven parsers using conventional, non-recursive, non-functional programming techniques.

And - finally getting to the point - they are so deep into the formalism and the computing environment of the day that they never actually answer the question "what causes infinite recursion in parsers?"

Well, here's the answer: any algorithm which revisits the same non-terminal without consuming a terminal symbol will infinitely recurse.

Huh?

This highlights another problem in understanding compiler generation: the compiler-eze terminology stinks. It emphasizes the algorithms, not the problem we're trying to solve.

So, here's what the Parsing Problem is:

Given a string of characters, does it make sense in a specific, grammatical language?

OK - that's not specific enough to answer. So let's make it more concrete:

First we will define a bunch of things we will call words and symbols. A word will be a string of 'letters' without any spaces in them. In English we also allow hyphens, so 'cat-food' could be classified as a word. In PHP a word might be a reserved word - 'foreach' or 'if' - or something like that. Anyway, we decide how to find these things in a string of characters.

We're going to call the things we find 'tokens' and it's the job of the 'lexical analyzer' to eat the string of characters and spit out an ordered list of 'tokens'.
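
To make that concrete, here's a minimal sketch of a lexical analyzer for the tiny four-word language I'll introduce below (the words A, B, C, and D plus the symbols AND and OR). The function name and the error handling are mine, invented for illustration:

function lex($input) {
    $tokens = array();
    foreach (preg_split('/\s+/', trim($input)) as $chunk) {
        if (in_array($chunk, array('A', 'B', 'C', 'D', 'AND', 'OR'))) {
            $tokens[] = $chunk;    // a recognized word or symbol
        } else {
            throw new Exception("not a word or symbol: $chunk");
        }
    }
    return $tokens;
}

print_r(lex('A AND D'));    // => Array ( [0] => A [1] => AND [2] => D )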

These tokens are what the Language Grammarians call 'terminals' or 'terminal symbols'.

I'd rather call them 'tokens' or 'words' because that puts the focus back on what they are in the language. The term 'terminal' puts the focus on the activity of the parser - which we haven't gotten to yet.

Now, you might try to build a grammar description using only 'tokens', but it would get pretty large pretty fast and it would be really limited.

So you need something else. You need things which represent parts of the language. For example, you might need something called a 'sentence' [starts with a capitalized word and ends in a terminating punctuation mark: . or ! or ?] and maybe a 'paragraph' and maybe . . . well you get the idea.

These things which represent parts of the language can be composed of tokens or other parts of languages. In fact, in order to be really useful, these parts need to be able to refer to themselves as part of their definition - that is 'be recursively defined'.

For example, let's say I have only four words: A, B, C, and D. I also have a couple of symbols, say AND and OR. That's my whole vocabulary.

Now let's say I want to construct sentences. I might say something like:
sentence : A AND B | A OR B | C | D ;

where I'm using ':' to mean 'defined as' and '|' as 'or it might be' and ';' for 'that's all folks'.

But this is kind of limiting. So let's say I want to build more sentences than I can list using only words and symbols.

word : A | B | C | D;
sentence : sentence AND sentence | sentence OR sentence | word ;

In compiler-eze, these parts of sentences are called 'non-terminals' - again, putting the emphasis on the process of parsing [the parser can't stop on a non-terminal] rather than on the structure of the language. I'm going to call them 'fragments'.

Now, there are two ways I can use a grammar:
  1. I can build sentences using it - which you do all the time: writing, speaking, creating programs, etc.
  2. I can transform strings of characters (or sounds) into sentences so I can understand them - this is called 'parsing'
Before we get to parsing, let's look at how we can use the grammar to create a sentence.

Let's say I want to build a sentence - but I really don't care what it means, only that it's grammatically correct.

I'll start with the fragment sentence. But this doesn't get me a string of characters. Grammars can only build sequences of 'fragments' and 'tokens'. Tokens are made up of sequences of characters - which is what I want - but 'fragments' aren't: they are made up of 'fragments' and 'tokens'.

So, in order to build a character string - or say something in the language - I have to get rid of all the 'fragments' so that I have a string of 'tokens' which I can (at least theoretically) feed to the un-lexical un-scanner which will produce a string of characters - which I can then print in a book.

So how do I proceed? (The arrow -> means: take a 'fragment' in the sequence on the left, replace it with one of the alternatives from that 'fragment's' definition, and write the result on the right side of the arrow. Which is easier to do than to say.)

sentence -> sentence AND sentence -> A AND sentence -> A AND D

and now I'm done. I have 'produced' a sequence of 'tokens' [TERMINALS in compiler-eze]
which I can un-lexical analyze to produce a sequence of characters.

Now in compiler-eze, the alternatives on the right side of the definition of 'sentence' are called 'productions', because replacing a 'fragment' by one of them 'produces' something which is grammatically correct.


OK - this is pretty straightforward, if boring. So let's turn to 'parsing'. That is, given a string of characters, is it a grammatically correct sentence?

The mathematicians would say 'it's grammatically correct if (and only if) there is a sequence of replacement operations I can find using productions which will generate the sentence'. So - as they would have it - they have 'reduced' the problem of 'parsing' to finding a sequence of productions which will produce the sentence.

How do we do that? The Dragon book starts by analyzing algorithms, but let's take a different approach: let's look at what we do when 'parsing' a sentence somebody says or that we've just read.

What I think you do (or we do) is look over the sentence and divide it up into chunks which make some sort of sense. Like 'Joe ran through the forest'. Well, what's this about? 'Joe'.
What did he do? 'ran'. Where did he do it? 'through the forest'. Stuff like that.

Let's formalize this procedure:

First we'll lexically analyze the sentence: for 'Joe ...' this amounts to classifying each word according to its possible uses:
  1. 'Joe' - is a noun and a name. It can be used as the subject or the object of a phrase
  2. 'ran' - is a verb. It can be used as a 'verb', as part of a predicate, part of a compound verb, or in a phrase ['seen to run']
  3. etc
Then we start parsing by examining the first token: Joe. Some sentences start with a noun, so we put 'Joe' on the shelf and look at the next word to see if it fits with how sentences which start with nouns are constructed. etc.

The point is, we are scanning from left to right and trying sentence forms to see if they fit
the noise we just heard or read. [Left to right, right to left, up to down - the direction doesn't matter so much as the fact that we're focusing on one word at a time in a consistent order.]

So, in parsing we have two scans going on:
  1. we are scanning the token stream
  2. we are also scanning across a production to see if these tokens fit into it
The 'parse' terminates when the token stream is exhausted and all the tokens have been stuffed into 'fragments', OR when something won't fit into any fragment. This is controlled by the sequence of scans across productions. Each time we start scanning, we start with some 'fragment' definition and exhaustively try all of its productions to find a fit with the token stream - remember that we are scanning the stream left to right. So the only way to get into an infinite recursion is to find a production scan which does not terminate.

Scanning a production terminates in one of three ways:
  1. a segment of the token stream matches the entire production - then the production is accepted. Accepting means that we don't have to look at those tokens any more and we can make a record of the fragment we recognized. [in compiler-eze we then 'reduce' by replacing the production by its non-terminal in the non-terminal definition (again, emphasis on algorithm rather than process)]
  2. a token doesn't fit, in which case the production is rejected.
  3. the production can be empty - and so it's trivially satisfied. [I forgot this earlier and have to think some more about it. Golly! that's meat for another post on this topic]
So if - in our scan across the production - we never look at any 'tokens', we will never terminate the scan. How can this happen?

Here's an artificial example:

frag1 : frag2 | WORD1 ;
frag2 : frag1 | WORD2 ;

No matter what I scan, my production scan will first look for frag2 which will look for frag1 which will look for frag2 which will . . . and I will never examine a token, so I will never reach the end of the token stream.
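
In code, the trap looks something like this - a sketch of the naive recursive descent functions for frag1 and frag2 (the names and the array(matched, position) return convention are mine). Notice that frag1 recurses before it ever looks at a token:

// Each function returns array(matched?, new position).
function frag1($tokens, $pos) {
    list($ok, $next) = frag2($tokens, $pos);    // production: frag2
    if ($ok) return array(true, $next);
    if (isset($tokens[$pos]) && $tokens[$pos] == 'WORD1')    // production: WORD1
        return array(true, $pos + 1);
    return array(false, $pos);
}

function frag2($tokens, $pos) {
    list($ok, $next) = frag1($tokens, $pos);    // production: frag1 - and around we go
    if ($ok) return array(true, $next);
    if (isset($tokens[$pos]) && $tokens[$pos] == 'WORD2')    // production: WORD2
        return array(true, $pos + 1);
    return array(false, $pos);
}

// frag1(array('WORD1'), 0) never returns - PHP just blows the stack.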

To go to a less artificial example, let's go back to my A, B, C, D language.

I'm given the sentence A AND C and I want to see if it can be produced by the grammar. I decide to 'run the grammar backward' to see if I can find a sequence of substitutions which work.

OK, I start by guessing it's a 'sentence', so I write down:

sentence

Now I say - 'what production might this be? Let's try the first one!', so I grab:

sentence AND sentence

Now you can look at the whole sentence and say 'Yep!!! It fits', but the computer will only do what it's programmed to do. So, let's say that I've programmed up a recursive descent parser, which works by defining a function for each 'fragment' which it calls when it sees its name in a production.

So my 'parser' will see 'sentence' and call the 'sentence' function which will then look at the first production and will see sentence and will call the 'sentence' function and . . .

And there you are - infinite recursion.

So we can't use a recursive descent parser. Right? Well, . . .

The recursion isn't caused by the parsing method, it's caused by any algorithm which attempts to match the same 'fragment' twice without recognizing and moving past a 'token'.

So infinite recursion in parsing results from designing an algorithm (any algorithm) which can cycle through a sequence of 'fragments' without ever recognizing (and using) a 'token'.

So, can I patch up a 'recursive descent parser' so that it handles 'left recursion' and other forms of infinite recursion?

Sure. I just have to keep track of my progress through the token stream and reject any production in which a 'fragment' occurs which I'm in the (recursive) process of examining AND which is at the same place in the token stream as it was before. Again, this will be easier to code than to say.
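
Here's a minimal sketch of that bookkeeping, assuming the grammar is stored as an array mapping each fragment name to its productions - the names and the data layout are mine, not the actual framework's:

// $active records which (fragment, token position) pairs are in progress.
$active = array();

function parse_fragment($name, $grammar, $tokens, $pos) {
    global $active;
    $key = "$name@$pos";
    if (isset($active[$key]))          // same fragment at the same place:
        return array(false, $pos);     // reject instead of recursing forever
    $active[$key] = true;

    $result = array(false, $pos);
    foreach ($grammar[$name] as $production) {
        $at = $pos;
        $ok = true;
        foreach ($production as $symbol) {
            if (isset($grammar[$symbol])) {    // the symbol is a fragment
                list($ok, $at) = parse_fragment($symbol, $grammar, $tokens, $at);
            } else {                           // the symbol is a token - this is what eats input
                $ok = isset($tokens[$at]) && $tokens[$at] == $symbol;
                if ($ok) $at++;
            }
            if (!$ok) break;
        }
        if ($ok) { $result = array(true, $at); break; }
    }
    unset($active[$key]);
    return $result;
}

// The toy grammar from above would be stored as:
// $grammar = array(
//     'word'     => array(array('A'), array('B'), array('C'), array('D')),
//     'sentence' => array(array('sentence', 'AND', 'sentence'),
//                         array('sentence', 'OR', 'sentence'),
//                         array('word')),
// );

This guarantees termination: any cycle through the same fragment gets cut off unless a token has been consumed in between. Whether a simple reject is enough to accept everything a left-recursive grammar can produce is a separate question - and part of what I still have to finish.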

I'll post a note when I've finished fixing this thing - in case you want to look at the code.

Mike



Monday, August 2, 2010

Change - again

Just about everyone I know would rather die than change something they believe.

That's too vague.

Let's say I believe I'm too fat. That can make sense if I look in the mirror and see somebody who looks like a sphere. But for an anorexic, when they look in the mirror they see somebody who looks like a stick.

The reason we say they are 'anorexic' isn't because they look like a stick. It's because they look like a stick and think that they are too fat AND they won't change what they believe.

So how do we react to this?

We call something like this a 'disease' and look for something to do to them to make them change. Probably some chemical we can put in a pill or an injection or a patch or a suppository.

Does it really make sense that an inert chemical can cause someone to have a specific idea? Isn't an 'idea' or a 'belief' more complex and specific than a single chemical?

So what can these 'drugs' really do? - other than slow down or speed up thinking?

If that's all they can do, then 'drugging' people just changes their ability to think - their 'thinking environment' - not their beliefs.

So 'drugs' can't 'cure their disease', although they may make it possible for them to think about it differently. Maybe it makes them think more ssssllloooowwwwwlllllyyyyy. Maybe it makes them stop thinking at all. Or maybe it just makes them passive so we don't have to think about them at all. Or maybe - as my friend who knows these things says - they generally don't work.

But that's not the point.

The point is: if an anorexic didn't believe he/she was fat, she wouldn't be an anorexic. She'd be a skinny person who knew she was too skinny and would do something about it - like eat some more.

So how do you change a belief?

Take football for example. The team which wins consistently believes that they can win. Not only that, they believe they can win this game. Right now. If they think they can't, they always lose.

What makes them believe this?

It's pretty simple: they have a slogan, a mantra, a rallying cry, a whatever to repeat over and over again. So as long as they keep telling themselves they can win, they will win, they're going to win - then they believe they can, will, and are going to win.

Is a belief anything more than something we keep repeating to ourselves?

What happens when we stop talking to ourselves about one specific belief? Doesn't the alcoholic or a smoker keep reminding himself that he needs a drink or a cigarette? What would happen if he - instead - reminded himself that he needs an ice cream cone? (Besides getting fat and maybe getting diabetes) Wouldn't he eventually go from being an alcoholic to an ice cream-aholic?

A belief is just a thought. It's not made out of stone or steel or even jello. It's 'mind stuff'. There are two kinds of 'mind stuff'. There are memories and there's 'what I'm thinking now'.
All you can do with a memory is either lose it or drag it out to 'think about it now'. Everything you do and experience is the 'what I'm thinking now' stuff. That's where the anorexic and the alcoholic and the smoker 'belief' exists.

There isn't any automated thought loader which pushes thoughts into your 'thinker' and makes you think them. You get to pick and choose.

Don't believe it? Close your eyes and try to count the thoughts which come up over the next 10 seconds.

If you're like I am, there were a lot of them. Ten, a hundred, I don't know. Just lots and lots of them. I'll bet you 'thought about' just a couple - maybe one or two. What happened to the rest of them? They're like the kids you didn't pick to be on your team: they just wandered off.

The stuff you and I believe - about life, goodness, and - especially - ourselves - is just these familiar little thoughts we keep repeating. And by repeating them, we think they're real. And that's all a belief is.

So really, how hard is it to change a belief?

It's easy - if you want to and are brave enough to give it up.

Sunday, June 20, 2010

Human Development - kind of

So somehow I went from wherever I was as a young guy to this worn out husk who had to deal with raising six kids. It was nothing I planned to do, but, well, there it is.

That's not the point - it's just to give you my credentials, assert my authority, bolster my claims, etc etc.

The first point is that I'm writing from my experience - not textbook theory or some other academic point of view [I have those also, but not about kids].

Here's the Second Point:

When a kid is just born, they don't do a lot and the way they interact is really different from how they are later on. They're kind of like little animals that just crawl or run around. They don't really argue about anything. Their complaints are very personal and immediate - hungry, wet, cold, lonesome, tired.

After a couple of years something happens. You've been going along saying things like: Please don't do that or Please do this - and almost getting used to the fact that they never really pay much attention and don't seem to remember what you said from one second to the next. But then the thing happens: the kid turns around and looks you straight in the eye and says - very firmly and usually pretty loudly - "NO"!

This is a major breakthrough.

The kid hasn't turned into a monster or become a 'Terrible Two' (as I was told by my mother, over and over again). The kid just now finally noticed that you're trying to get him (or her) to do something that wasn't what he wanted to do.

That's the point when you can finally start teaching the kid how to 'not get run over by a car' and 'to clean your plate' and 'to pick up your toys'. It won't work - at least not for the first 10 or 20 years - but you can now get started and be (relatively) happy with the knowledge that you're not just being ignored. You're being Actively ignored and - believe it or not - all of your talking, pleading, reasoning, and (most of all) the example you set is sinking into that little mind.

After mulling this over for a bunch of years, I've come to the conclusion that this is a pivotal point in all of our development. It's the point where we realize that we live in a context over which we do not have total control and with which we must learn to interact.

And for a Third Point:

Lora (we're married and four of the kids are her fault) is a certified Montessori method pre-primary instructor. She went to school for a year, did a year internship, read all of Montessori's books and practiced on our kids. Turns out that's enough for her - she prefers ponies and dogs to humans.

Anyway, she has creds as well.

Montessori discovered - by observing kids - that kids go through various phases of development where they are really different. She called them sensitive periods and it's fascinating stuff.

But - to get to the point - there's one thing I wanted to write about here. It's the thing which happens when the fish in the classroom fish tank dies.

The pre-primary Montessori classes have kids in the 3 to 5/6 year range. They generally have a fish tank. At some time during the year one or more of the fish will die.

Here's where it gets interesting.

The kids come in and quickly divide into two groups.

The younger kids look at the fish and say 'the fish is dead' and then go do something.

The older kids look at the fish and say 'the fish is dead. I wonder why the fish died. Was the water too hot? Was it too cold? Did another fish kill it? Wasn't the food right? I think that ...'

You get the difference?

Before something changes, it's ok to just see the dead fish and register it as a fact.

After 'the change' we have to 'know Why'.

I think this is the origin of Why.

You should listen to some of the silly 'reasons' 5 and 6 year old kids come up with. Their logic is not all that bad, but the 'facts' they start from are pretty lame. But that makes sense because they don't have much experience.

So do that for a while - - - and then - if you're brave - listen to some of your friends and neighbors talking about stuff. Things like 'the problem with ... is ... because ...'

When I started listening to adults and comparing it with the stuff 5 and 6 year olds come up with, I got really upset. It's the same stuff!!!

I think everybody is acting like 6 year olds - including the guys who run the countries and big companies.

Doesn't that explain a lot of what's going on?

I'm telling you - the problem is the whole thing is being run by 6 year olds and that's why nothing works. The answer is . . .

ORM or Not? Part Two - Definitely Not

Maybe that's a little too harsh - but I don't think so.

Here's the background:

One of the biggest recurring programming problems I've had to deal with in site and application development has been - (drum roll) - developing the database. Not populating it - that's just boring. Developing and refining the data definitions.

The "Conventional Wisdom" promulgated by Software Gurus is basically this:
  1. Databases are Good With Data
  2. Data Objects Are Good With Data
  3. A Programmer Must Define a Mapping Between these Two Good Things and then All Will Be Good
Not my experience.

Software Gurus don't write applications. Software Gurus write Books and (occasionally) toy examples. So they really don't understand that Good does not generalize: the Good that Databases are with data is all about safety and accessing it in great and small piles. The Good that Data Objects are about is manipulation, fine structure, and flexibility.

My experience with Object Relational Data Mapping - which is just a fancy phrase for how you get the data from the database into an object and back - is this: when I Did it Their Way, I had to hand-coordinate both my Objects and my Database.

So when I wanted to add a field or change a field or change a datatype, I had to do everything twice.

Now Rails has Active Record - which is a stupid Software Guru name for an ORM which creates the Data Objects automatically (my driver's license is an active record - meaning it's current so I can legally drive) - but that doesn't 'solve the problem', it just moves it. [Maybe Rails 3 is smarter; I dropped out just as Rails 2 was coming out.]

Specifically, the Rails solution moved the problem to 'migrations' which turned out to be fragile. I know: I tried it.

So, a couple of years ago I proposed Not building an ORM, but rather loosening the coupling between the Database and the Data Objects.

Here's what I've done:
  1. Defined a PHP class which implements a data abstraction which includes most of the types of data and methods I want in my CMS [see http://www.yasitekit.org] and knows how to map those data types into database fields. It also knows how to create the database structures and create, update, delete, and retrieve those values. [A toy sketch of the flavor of this follows the list.]
  2. Defined a PHP class which makes it easy (relatively speaking) to create objects which are instances of the data abstraction class. These classes come with a bunch of methods which I've found useful, as well as a management class which automatically provides interactive data administration.
  3. Some automated analysis tools which work with this stuff so it's possible - and relatively painless and safe - to add, delete, and modify the derived objects (point 2) and reload the database. All done by hacking the derived PHP objects, but never ever touching SQL.
  4. As a side benefit, I threw together a database adaptor which abstracts the 8 essential database operations [create/destroy database; create/drop table; insert, update, delete, select] at a level high enough that we never have to muck with SQL. This makes it possible to augment it with non-SQL databases - such as MongoDB. In its present form it allows painless migration between sqlite, mysql, and postgresql (it currently handles 5 different PHP database interfaces - automatically figuring out what's available - yada yada yada)
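
To give the flavor of it, here's a hypothetical toy sketch of the loose-coupling idea - the class and method names are invented for illustration and are not YASiteKit's actual API. The point is that the fields are declared exactly once, in PHP, and the SQL is derived from that one declaration (the $class::$fields syntax needs PHP 5.3):

class DataObject {
    // Derive CREATE TABLE SQL from a class's single field declaration.
    public static function create_table_sql($class) {
        $cols = array('id INTEGER PRIMARY KEY');
        foreach ($class::$fields as $name => $type) {
            $cols[] = "$name $type";
        }
        return "CREATE TABLE " . strtolower($class) . " (" . implode(', ', $cols) . ")";
    }
}

class Article extends DataObject {
    public static $fields = array(
        'title'   => 'VARCHAR(255)',
        'body'    => 'TEXT',
        'created' => 'TIMESTAMP',
    );
}

echo DataObject::create_table_sql('Article'), "\n";
// CREATE TABLE article (id INTEGER PRIMARY KEY, title VARCHAR(255), body TEXT, created TIMESTAMP)

Adding or changing a field means editing $fields and reloading the database - never touching the SQL by hand.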
So, at this point, I feel more than comfortable saying No to ORM's.

BTW - if you go to the YASiteKit site, you may be a little disappointed because all of this good stuff is not available yet. But It Will Be - Real Soon Now (no kidding). And a whole lot more.

What is on the site is a pretty-much full set of documentation - both about the system design and about how each of the parts work [I write doc as I go, so it's not all pretty - but it's useful (I know because I use it)].

So take a look. If you have any comments - let me know.

Also, if you like it enough to want to help - let me know sooner - I'm almost ready to turn it loose to see if it gets any traction.

Tuesday, June 1, 2010

Choices [Rant Warning]

I've been thinking a lot about the idea of Choice lately. Not so much any particular choice, but the whole idea itself and how it differentiates us from rocks and water and stuff like that.

It seems to me that one of the distinguishing features of 'life' versus 'non-life' is that 'living' things get to choose what they do rather than just follow the 'laws of nature'. For example, rocks don't do much of anything except sit where they are, roll downhill, or get knocked someplace else by something else, but a bug can get up and go someplace else - even against gravity, if it wants to.

So what? Well, I've been wondering for a long time why we (that is us 'humans') do such stupid things and - even when we realize they are stupid and we are hurting ourselves - keep doing them. We even go so far as to tell ourselves - and everyone who will listen - how hard it is for us to not do them.

Take eating too much. When I eat too much for a while, I get fat and crabby. Then I feel sorry for myself because (I claim) it's oh, so hard to not eat too much. It sounds like there is an army of bad guys stuffing food into my mouth - but what's really happening is that I'm choosing to pig out. It takes real effort to keep eating when I'm full or when I'm not really hungry.

That one's simple and easy to see. And it's easy to see that if I'm fat, I made myself that way and I did it because I wanted to eat more than I wanted to feel good.

But it goes a lot further than just eating.

I think we've made up all kinds of religious garbage to excuse our disgusting behavior. We make up and believe in gods and devils and genetic forces and team loyalty and goals and success and all kinds of other junk to take the heat off of ourselves. 'The Devil tempted me - and I was weak',
'It's in my Genes - I can't help myself', 'We've got to do it for the Team', ' .. or the Company', '... or our Kids', '... or for Honor', '... or Because That's the Kind of Man/Woman I Am'. etc

It's all a smoke screen.

We do it because that's what we choose to do - just like those guys who strap bombs on themselves and kill a bunch of people.

It's a personal, misguided decision.

The hell of it is, it isn't hard to Stop. It doesn't take Effort to not do something. There really isn't any force making us do this stuff. Want proof? Just anesthetize your mind somehow - and you won't be doing all that stuff you think is so hard to not do.

So let's get over it and stop kidding ourselves.

We muck things up, piss each other off, get mad about stuff, drill oil wells in places where we can't plug leaks, kill people, destroy the whole bloody planet and everything on it because we choose to.

That's all there is to it.

And that's what makes us different from a nice, peaceful rock.

Thursday, May 6, 2010

Understanding

To Understand, to Have Understanding, to Know

What does it mean when we say "I understand"?

Like everybody else - or at least I think everybody else - I used to take it for granted. I thought that I knew what it meant.

Now that I'm getting to know more about what goes on in whatever it is I call my mind, I'm pretty sure I was wrong.

Here's what I've figured out so far.

Thinking

First of all, we seem to think that we think.

Some of us think we think "rationally", but most of us have no idea what that actually means.

It seems to me that we - as a species - have been working at understanding what 'thinking' is for a very long time. At least since the Greek philosophers and the Chinese sages. Probably longer than that - at least 2,500 years or more.

Somehow we've latched on to 'logic' as 'real thinking' and everything else as some sort of minor annoyance.

I don't agree that 'logic' and 'rational thinking' are the real kings and queens of all 'mental processes' and that the rest of whatever goes on in our minds is at most second rate.

And here's why:

Logic - a Necessary Digression

We've studied the hell out of Logic. We've formalized it and we know what it is. Well, those of us who've taken the trouble to study it a little know what it is.

First, what it isn't.

Logic is not sticking 'because' in the middle of sentences. It's not making up 'reasons' for things. And it's not asking and then answering 'why'?

Logic is a rigorous, formal method of evaluating the Truth or Falseness of sentences which are constructed according to specific rules.

First of all, what is a sentence?

In 'Logic' it's a string of symbols consisting of 'propositions' and 'logical connectives'. The 'propositions' are just blobs or words that can have any structure and may or may not mean anything. For the logician, the only thing that matters is that they are either True or False. In fact, for the logician, it doesn't matter at all what True and False mean - only that they are different and only that there are only two possibilities.

The 'logical connectives' are special words which can be used to 'connect' two propositions or sentences. When stuck in between two of these things, the three of them form a new 'sentence'. That 'sentence' is either True or False and the value strictly depends on (1) the values of the two things on each side of the connective and (2) the rules of the connective.

Let's get formal:
  • P, Q, and R are all propositions or sentences - which means they have truth values.
  • AND, OR, and IMPLIES are connectives
  • Let's add in NOT, which isn't a connective, but it's useful. It's 'logical negation' - which means that if P is True, then NOT P is False - and vice versa.
So we build sentences by writing P AND Q, Q OR R, P IMPLIES R and things like that. We evaluate these sentences according to the rules:
  • if P and Q are both True, then P AND Q is True, otherwise it's False
  • if P and Q are both False, then P OR Q is False, otherwise it's True
  • if P is True and Q is True or if P is False, then P IMPLIES Q is True, otherwise it's False
  • if P is True, then NOT P is False and if P is False, then NOT P is True.
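These rules are mechanical enough to write down as code. Here's a sketch of the connectives as PHP functions (the l_ prefix is mine, to dodge PHP's reserved words), plus a loop that grinds out the truth table for IMPLIES:

function l_not($p)         { return !$p; }
function l_and($p, $q)     { return $p && $q; }
function l_or($p, $q)      { return $p || $q; }
function l_implies($p, $q) { return !$p || $q; }   // False only when P is True and Q is False

foreach (array(true, false) as $p) {
    foreach (array(true, false) as $q) {
        printf("%s IMPLIES %s = %s\n",
               var_export($p, true), var_export($q, true),
               var_export(l_implies($p, $q), true));
    }
}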
If we add in some Parentheses, we can get really wild and use sentences for propositions and write things like:

((P AND Q) OR ((NOT R) OR S)) IMPLIES Z

We could also write the same thing without the parentheses, but we wouldn't know what it means without using some special rules. Why? Because we wouldn't know if
P AND Q OR NOT R OR S IMPLIES Z
means (P AND (Q OR (NOT R)) ) OR (S IMPLIES Z) or the thing I wrote above.

But that's for really studying 'Propositional Logic' - which you can find in a book. For now, it's not really important.

The important thing is this:

The Truth of the sentence depends completely on the Truth of the basic propositions - that is, the propositions which don't contain any 'logical connectives'. These are the 'Axioms'. It's what you start from.

Everything you write using Logic can be traced back to and completely depends on the Axioms - the Propositions you write down and start from.

This means that Logic only transforms the shape of the original propositions. It can't add anything new.

Working with Logic is like walking around a house and looking at it from different sides. No matter what you do, it's still the same house. Nothing is ever added or subtracted from what it was to start with.

There's more Logic - for example, First Order Logic adds 'quantifiers' to the Propositional Logic we've just outlined. It adds exactly two: the Universal Quantifier and the Existential Quantifier.

This lets us write richer sentences because we can then write things like: All P is True and At Least One P is True.

Having quantifiers makes it easy to tell when someone is saying something stupid. For example, if somebody says "All cats are mean", you can tell they don't know what they are talking about if you've ever met a cat that wasn't. So you object, and then they say you're being picky and that they really meant "most cats are mean". That is supposed to make it better, but it really
means that they don't really know, but want to think of it that way. So, it's better to keep your mouth shut and just know that they say stupid things.

Also, it helps tell what you can know for sure. For example, when somebody says "you can't do that", but you think it would be a good idea to "do that", all you need to do to prove that
it's possible to "do that" is to find one example where it worked. Then you can ignore them. This is a great help when starting a business, because most everyone will tell you it "won't work", but if you can find an example of when "it worked", you can ignore them because you know it's possible.

There are a lot of things like that.

So Why Logic?

Well, if I want to do something AND I can figure out enough things which are True, then I can use Logic to figure out if the thing I want to do will work.

Or, I can take the thing I want to do and I can see if it's Logically Consistent with a bunch of other things that I have to do or something like that.

Or maybe the thing I want to do has to have something which I can't do. If I can figure that out, then I can avoid trying to do something which won't work.

Logic can keep us out of trouble. It can help us predict if something will work or if it won't.

Knowing things like that saves a lot of time, money, anguish, and other things we want to save. It helps us be successful - whatever that means to each of us.

Logic is reliable.

Why is logic so reliable?

Not magic: Logic is a bunch of rules for figuring out if things will work based on thousands of years of experience.

So Logic is the Answer!

Right?

Wrong!

Propositions

Where do the Propositions come from?

Remember, the Logic just transforms our propositions - our axioms - our guesses of what is right or wrong.

If our Propositions are crap - then all the Logical Reasoning in the world won't make them smell good. It will only be looking at the crap from different points of view.

Crap is still Crap.

One place our Propositions don't come from is 'reason' or 'logic'. We get them by following some other rules.

Scientific Propositions

One set of rules are the ones used in Science: a fact is an independently verifiable experimental result.

Let's take that apart:
  • a result is something which can be measured. This means that there is a mechanical method which transforms something which happens into a number. The mechanical method must be reliable. For example, the diameter of an object. If it's a hard sphere, then that's easy to do; if it's a rectangular solid, then we need more rules (for example, measure each width on a line normal to each face and compute the average of the three measurements); if it's a bag of gas, then we're screwed because we can't reliably measure the diameter.
  • an experimental result is a _result_ measured from an activity which can be described precisely. The precision must be sufficient to compute the accuracy with which the measurements can be repeated.
  • a verifiable experiment is one which can be performed again. In order to do this, the activity of the experiment must be completely described in enough detail that the activity can be performed with the same precision.
  • an independent experiment is an experiment performed by a different experimenter, using different equipment at a different time and place from the original experiment. This depends on the precision and completeness of the description and removes any bias the experimenter, location, and time of the first experiment may have introduced into the results.
This is pretty restrictive. It's also pretty slow and pretty expensive. It's also pretty good at building reliable knowledge.

So that's one way of getting propositions. We design experiments and do them to see what happens. We verify them to make sure that we know what to expect. Then we try to figure out rules which describe what we've observed and then test them using Logic to look at the results from different angles - so to speak.

How else can we come up with a Proposition?

Guesses

How about we just guess?

Guessing is good. We do it all the time. Most of the time it doesn't work though.

But often, it's all we have.

Say it's election time and you decide to vote. Who are you going to vote for? You know both of them want to get elected and that both of them will say about anything they think you want to hear. In other words, most of what they say are lies. So you guess. You say "I think this one is more likely to do what I want" and then you pull the vote lever.

Now if you were being "logical", you'd do something else because you know both of them are lying. So maybe you wouldn't waste your time voting. Maybe you'd logic yourself into making a lot of money so you could just bribe whoever won. That would be more 'rational', if you want to get a politician to do what you want.

Emotional Propositions

How about we just claim something we want to be true actually is?

As far as 'Logic' and 'Reason' go, this is just fine. Remember, 'Logic' starts after you've got the propositions - it doesn't care where you found them. And inasmuch as 'Logic' is formalized 'Reason', well, the same can be said for 'Reason' as well.

When we do that and use 'Logic', we call that 'Rationalizing'.

We're 'just making up reasons for what we want to do'.

This is how a lot of real disasters are created.

Take starting a war.

Does it really make sense to rip everything apart, destroy somebody else's hard work, their lives, etc etc?

Sure - if you're willing to cause that much pain and you want their stuff.

Or maybe you think you need to think they're evil and that you're so right that you have to stamp out the evil.

But enough of this

Back to Thinking

I think you'll agree that - looked at this way - rational thought isn't really much 'higher' than any other form of 'thought'.

We're really just kidding ourselves.

So what is all this 'thinking' and 'understanding'?

What I 'think' about 'understanding'

When I'm really honest with myself, I say that I understand something when I have a feeling of comfort and confidence that I know what that 'something' will be like the next time it comes up.

If it's a freight train moving along - I 'understand' that it will stay on the tracks and I can control whether or not it squashes me.

Or if I'm trying to sell somebody something - I 'understand' that if I get the price right and am patient enough and advertise it enough, somebody will come along and buy it.

Sounds pretty good - doesn't it?

What Happens if things happen like I Predict?

When things happen the way I predict - then I think I'm pretty smart, I get more confident, and I 'lean on' my 'understanding' even more.

Does that mean that I can make accurate predictions?

Experience says: Well, Maybe. It all depends.

What Happens if things don't happen like I expect?

Well, lots of things.

Mostly I used to ask Why?

Then I'd come up with some Propositions to use to build a logical argument explaining how what I understood should have happened, but didn't.

This would usually involve finding somebody to blame. (If they'd only listen to me or do the right thing or weren't so self centered or . . .)

Then I'd feel comfortable again because I'd 'know why it happened that way'.

In other words, I'd 'understand'.

(I know you would never do anything like this. You understand things a lot better than I do - don't you?)

And So, . . .

It's just one big circle of self delusion.

If you believe this stuff, you probably feel uncomfortable.

So even if you know it's true, you won't feel that you 'understand' it and will want to try.

See the trap?

(c) Copyright 2010 Mike Howard. All Rights Reserved.

Friday, April 23, 2010

Timing Tests

OK - I'm not a timing test expert. I'm not a program profiling expert. etc etc etc

I know timing tests are 'hard to do right'. I know that there are all kinds of considerations. I know that 'to do it right, you have to . . .'

But I don't really care about 'doing it "right"' according to some picky standard.

What I do care about is not writing really slow code.

For that, the rules are simple:

Rule 1: Don't do things that take a long time

Rule 2: If you have to repeat something a lot, check out alternative ways to do it and pick the method which is both clear to read / understand and takes the least time.

Rule 1 - expanded:

You do this by knowing how long things take and which operations are blocking. You don't need to be accurate, because for most things, 'takes a long time' is measured in orders of magnitude.

Here are the relevant cases:
  1. Monolithic program doing in-memory data access/processing
  2. Multi-threading/parallel processing/whatever - any form of parallel processing which is executed inside a single process context. Here you tend to lose because of blocking and communication - one thread needs to access shared data and so blocks reads, etc OR one thread needs results from another to continue OR etc.
  3. Self generating code - aka Metaprogramming. This is a cool way to impress your friends, but it costs orders of magnitude in performance. The idea is to write code which traps function calls to functions which don't exist, then parse the function name and build a function 'on the fly' to do the task encoded in the function name, dynamically build the call sequence, execute the function and return the result. It's not hard to do, but it's pretty much unnecessary (almost all the time) and really slows things down, because parsing strings takes a lot of repetitive work. That's why we have compilers!
  4. Disk reads are much slower - at a minimum they require a context switch as you make a system call. Then it depends on file size, caching in the operating system, memory size, etc. The Rule is: for repeated reads, try to read only Once and cache the result in a variable [a sketch of this follows Rule 1's summary below]
  5. Run a subprocess - this requires a context switch, process invocation, lots of disk reads, etc etc followed by receiving result, parsing it, etc etc. Much more expensive than disk, but less expensive than Network reads.
  6. Network reads are longest - not only do they require a system call, you typically have to run another process someplace. If that process is on a different host, then the cost is astronomical relative to in-memory and disk i/o.
So Rule 1 says: if you don't really need to do the Slow Thing, then don't. And do the Slow Thing as seldom as you can get away with.
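
For example, here's the kind of thing I mean by 'read Once and cache' - a tiny sketch (the config file name is made up):

function get_config() {
    static $config = null;                        // survives between calls
    if ($config === null) {
        $config = parse_ini_file('config.ini');   // the slow disk hit - happens once
    }
    return $config;
}

Every call after the first is a cheap in-memory lookup instead of a trip to the disk.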

Rule 2 - expanded

Right now I'm writing a lot of PHP (don't groan, it seemed like a good idea at the time) and in this code I have lots of places where I need to do things based on the value of a string. For example, I'm writing a lot of PHP5 objects where I put guards on attribute access so that I can find spelling errors (my High School English teachers understand why I need to do this).

So I have lots of functions that look like:

function __get($name) {
    if (in_array($name, array('foo', 'bar', 'baz'))) {
        return $this->$name;
    } else {
        throw new Exception("$name is not a valid attribute name");
    }
}


or
function __get($name) {
    if ($name == 'foo' || $name == 'bar' || $name == 'baz') {
        return $this->$name;
    } else {
        throw new Exception("$name is not a valid attribute name");
    }
}

or
function __get($name) {
    switch ($name) {
    case 'foo':
    case 'bar':
    case 'baz':
        return $this->$name;
    default:
        throw new Exception("$name is not a valid attribute name");
    }
}
I do this a lot, so I need to know which one is the fastest. I don't need to know precisely, I just need to know 'more or less'.

To do this, I need to build a test case and run it to get some timing numbers.

The test case doesn't have to be perfect, but it does need to put the emphasis on the differences between the three different methods. It also has to be large enough to be able to distinguish run times between the methods.

In this case, I built the three functions, each with about 150 alternatives, and then built a list of trials which would fail about 1/2 the time. I then executed each function a bunch of times.

How many is the right bunch? I'm lazy, so I start small for the number of repetitions and then crank it up until the total run time per method is around 10 to 60 seconds.
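
The harness doesn't need to be any smarter than this sketch - wall-clock time around a big loop (the candidate function name and trial list are placeholders for whatever you're comparing):

function time_it($fn, $trials, $reps) {
    $start = microtime(true);
    for ($i = 0; $i < $reps; $i++) {
        foreach ($trials as $trial) {
            try {
                call_user_func($fn, $trial);   // the candidate lookup under test
            } catch (Exception $e) {
                // about half the trials fail on purpose - let them count too
            }
        }
    }
    return microtime(true) - $start;
}

// e.g. printf("switch method: %.4f seconds\n", time_it('lookup_by_switch', $trials, 10000));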

Here's what I got:
  • switch: 49.7731 seconds
  • in_array method: 86.3004 seconds
  • if with complex conditional: 57.0134 seconds
Guess which method I'm going with.

[guess how I'm going to refactor a lot of my code (sigh - I should have tested first)]

Monday, April 19, 2010

Self Image, Self Identity and All That

Who am I?

Or, more to the point, what is the 'idea' of myself that I identify with?

Or, even more to the point, how do I 'like' to think about myself?

I put 'like' in quote marks because 'like' doesn't necessarily mean 'enjoy' or 'makes me happy', but here it means 'what I keep coming back to because I believe it's true'.

In other words, the way I 'like' to think about myself might not be very nice - if I'm convinced I don't measure up to my ideals.

Everybody 'thinks' of themselves as something - has an expectation of who and what they are. In other words, Everybody 'likes' to think of themselves in a particular way.

That's the setup. Now, here are some questions:
  • Am I nothing more than an opinion? Or am I real?
  • Can I change 'Who I am' by changing my opinion?
  • Is my 'World View' a result of 'Who I am' or is it something I create for my 'Who I am' to live in?
  • Do I see and hear the world around me OR do I pick and choose what I hear and know?
  • Do I really know 'Who I am'?
  • Do I really know my friends? Or do I make them supporting actors for myself?
  • Can I live without knowing 'Who I am'?
  • How can I avoid living in an Illusion if I continue to 'know' 'Who I am'?
First Hypothetical

Let's suppose that 'Who I am' is an opinion.

Opinions are just ideas that can be changed. They aren't 'facts'.

If I hold one opinion today and another one tomorrow, it's unlikely I will be arrested, burst into flame, or that anything else substantial might happen.

I'll just have a different 'opinion'.

So, with my different opinion, won't the World be different?

Won't my friends become different people?

Won't the boundaries between good and bad and Right and Wrong shift? Won't they have changed just enough so I can make my 'opinion' work - at least as well as my old one did?

All I need do to test this is genuinely change my opinion once and see if this is what happens.

If it works like this, doesn't this mean that 'Who I am' is an opinion? An Illusion? and that I am living in a false world of my own creation?

Do I want to know?

Second Hypothetical

Let's assume 'Who I am' is somehow 'real'. It doesn't matter what this means other than that it is something other than an opinion which can be changed at a whim.

Now one part of my World View ranks everything by how 'good' and how 'bad' it is. There is usually a sliding scale from 'good' to 'bad' with 'saintly' on one end and 'absolute evil' on the other.

Naturally, I will think of myself as more 'good' than 'bad' - no matter how I think about how I live up to my expectations. [for example, if I think of myself as falling far short, then I will still think of myself as 'better' for having noticed this and for admitting it to myself]

So how will this affect my World View? How will I tend to filter and interpret that which I see?

Isn't it natural for me to look for the evil and bad - so that I am - in contrast - much better 'than average'?

Won't I go out of my way to do so? Won't I respond with much satisfied emotion to my discoveries of the evil in others? Satisfied in my own 'goodness by contrast'?

How can we test this?

Isn't this consistent with the continuous litany of complaint and criticism - in the press, in entertainment, and in our own wool-gathering minds?

What happens if I see lots of goodness around me? Doesn't that push my 'Who I am' down into the muck of badness - or at least shift me down a little?

If I can't change my 'Who I am', then I will not 'like' myself (and remember 'like' means what I said it means up above). Isn't that hard to tolerate? 'Who do those "goody, goodies" think they are anyway?' Doesn't it seem natural for Cain to kill Abel?

Third Hypothetical

Again, suppose 'Who I am' is an opinion.

Then there must be something which has that opinion.

That something must be able to observe - inasmuch as it has thoughts, the 'opinion' being one of them.

So can this 'something' watch its opinion and the thoughts its opinion is thinking? (or maybe the thoughts it is thinking for its opinion).

If this is true, then 'Who I am' is an opinion and the 'something' can become aware of this.

How can we test this?

Can we watch our own thoughts? As we think them?

If we can, then this is true and it opens the _possibility_ that the 'Who I am' is an opinion and that it can be changed.

If this is true, then can't psychic trauma be impermanent? And if impermanent, can't it dissipate? And if dissipated, hasn't it been healed?

Further, how is an opinion maintained? It isn't made of wood or metal. It has no substance other than thought. If a thought isn't thinked, then it isn't. It's not there. It's gone.

So, if psychic trauma is thought, isn't it impermanent - doesn't it have to be 'thinked' over and over again in order to be? So isn't not-thinking it the path to its dissolution?

Is the dwelling on 'the bad things' and 'how sick I am' the cure or the cause of disease and despair?

Fourth Hypothetical

'Who I am' and 'Who You are' are different.

It doesn't matter if they are real or just opinions.

You see the world differently from how I see the world because you must shape your 'world' so it fits your 'Who I am' and I must do the same.

But mine is different from yours, so our 'worlds' are different.

Can I really see 'Who You are'?

Can I do more than guess?

Suppose your 'Who I am' world conflicts with my 'Who I am', from my point of view. Won't I filter and squash what I see and hear to fit my 'Who I am' instead of yours (no matter how honest, just, and polite I think I am)?

So how can I ever see where you don't make my 'Who I am' good and right? And can you see me?

Deep down don't you think you're a little better than me? I know I am a little better than you - or at least a little righter.

Doesn't that prove we can never know each other?

How can we converse?

Aren't we having two meaningless conversations with ourselves while we pretend that the other is there?

Fifth Hypothetical

I find that knowing 'Who I am' leads to an expectation that I will continue to be 'Who I am' and that I interpret and bend everything I see, hear, taste, smell, feel and think so that that will be true. I insist on continuing my existence as I envision it.

Doesn't this mean that I've been living in an illusion?

Can I escape the illusion without giving up this expectation, this prediction of the future?

Realizing this, can I continue to maintain the illusion - knowing it is a lie?

If I give up my expectation that I will continue to be 'Who I am', won't this mean I will change 'Who I am' into something else? And can I tolerate replacing one 'Who I am' with another?

Copyright Mike Howard, 2010. All rights reserved.

Saturday, April 10, 2010

News Flash: PHP Documenters Insane!!!!

The PHP documentation has gone from very useful to hideously obstructive.

The people who are rearranging the doc into little, tiny chunks which are hyperlinked all over the place obviously never write code.

I just spent 10 minutes trying to find the name of an IO Exception so I can use it in some code I'm writing.

Old Doc:

  1. I would go to the index, click on Exceptions and then scroll down the page (or do a find on IO) and there it would be. 10 seconds tops.

New Doc:
  1. Go to the index click on Predefined Exceptions
  2. Click on Exception - find description of Exception Object - info not there
  3. Back Button
  4. Click on Error Exception - find description of the generic ErrorException object
  5. Back Button
  6. Click on SPL Exceptions (what the hell is this? - something new?)
  7. Look at the Table of Contents: 13 Exception Categories - none of which looks like an IOException
  8. Click on Predefined Exceptions in the See Also -
  9. Back to the Previous Useless Page - And Repeat

First they completely screwed up the Perl Regular Expression page by chopping it into tiny, obscure chunks, and now they've destroyed the exception documentation.

To the PHP Documentation Project:

PLEASE put it back the way it was.

Or get somebody who actually uses this stuff like a handbook while writing code to fix it

Or shoot somebody.

To Everybody Else:

Maybe the documentation people have stock in a book company and want the reference books to succeed by making the online Doc unusable?

All I can say is that the way they are going is really going to help Rails and Django.

What do you think?

P.S. Please Send a Nasty Note to the Gods of PHP

Thursday, January 28, 2010

Why are our Programming Languages so Bad?

I just took a quick look at Scala and Lua . . . and I don't think either one is a winner, but for different reasons.

The Scala guys seem to have fallen into the arcane syntax trap - with lots of critical information inferred. I think I agree with this blog post from January, 2008: Scala is probably not a readable language.

Scala seems partially motivated by the Java mistake of believing that 'more required words make more readable code'. That just makes the programs bulkier, not clearer. Other parts of the motivation seem to be to try to find a syntax which supports functional programming, OOP, and everything else which might be fun.

On the other hand, Lua doesn't have a rich enough structure. I think I'd rather write in C than something like Lua. Lua isn't OOP, it isn't a functional language, it doesn't even support structures sufficiently, let alone objects. I've had a lot of experience working in awk - which is nice, but does not have good support for complex data objects, which makes the programs difficult to maintain and . . . this could get boring fast, so forget it. The bottom line is: languages which don't support a decent object model slow me down too much to bother with.

I've been writing a lot of Python lately - after burying myself in a PHP project for about a year and a quarter. I also write a lot of shell script, HTML, CSS, and whatever. I don't write Ruby anymore - but I don't want to get into that now. Before this I wrote a lot of C, Pascal, awk, etc and - believe it or not - about a half a ton of FORTRAN. So I've written a lot of code in some pretty awful languages.

The simple fact is that none of the modern languages are any good. They all suck.

Why is that?

We really know a lot about language design by now - or at least we should.

One of the things which really irritates me is that every damned language uses different syntax for the same semantic concept. I don't think I know of two languages which implement if ... else-if ... else ... the same way. [else-if is spelled 'elif', 'elsif', 'elseif', or 'else if' or doesn't exist].

It is now a fact that programmers work in multiple languages. These stupid, unnecessary syntax inconsistencies make life hell.

I'm starting a list of what should be in a modern language:
  1. It should be syntactically small and clear. Don't use words where symbols are just as clear: i.e. don't use 'begin' and 'end' - curly braces work just fine.
  2. No reserved words. We don't need them. Somebody - I think it was Kernighan - pointed out that a decent parser can determine the meaning of a word based on the structure of the sentence, so that 'for' can mean one thing in one context and something else in another.
  3. It has to support objects with (at least) single inheritance. I think Ruby got that mostly right. I think Python got it wrong by supporting multiple inheritance. I like Ruby's idea of Modules because it allows excellent code reuse. It gets us the utility inheritance promises without the headaches.
  4. Scoping has to be correct from the start. Block scoping the way C does it is right. Matz got that wrong in Ruby - I hear they're changing it again in 1.9. Python still doesn't have it right yet, but I think it's getting closer.
  5. Global variables are Evil. Crockford is right: They should not exist.
  6. Variable declarations are a pain - but less painful than tracking down spelling errors which the language accepts. I've lost a lot of time needlessly hunting down misspelled words in PHP that would have been caught by simply requiring variable declarations so the compiler could catch them. So, variable declarations are Good.
  7. String handling is Good. If you don't think it's a necessity - go write some string handling in C for a few years.
  8. Dynamic - aka Duck - Typing is Good - we need it. I wasted too many years living without polymorphic stuff to ever go back [read: writing in C, pass a pointer and figure out what it is inside the function and hope to hell you don't screw up].
  9. Static Typing is Good - we need it too. I've wasted too many years finding bugs which could be caught by a decent type system.
  10. Functions need to be 1st class things. In fact, everything needs to be.
  11. Closures are Good - we need them.
  12. Functional programming is good - We need it.
  13. Imperative Programming - Structured style [like Dijkstra told us] is good - we need it too.
  14. Operator overloading is Good - We need it. Everything should be overloadable. I think Ruby got that right as well. Python has been incrementally getting there for years.
  15. Self modifying code can be a good thing - but it's hard to do right and rarely needed. I think it's better to support it directly rather than trying to discourage it. This is in spite of the crap the Ruby community loves to write [they call it meta-programming, but it's not really meta-programming - it's automatically generated code]. For some reason they think that self-modifying, self-generating code is intrinsically a good thing - but then I used to do a lot of stupid stuff when I was younger too. I think this is a result of lack of experience - especially in maintenance - and a lot of incompetence. It sure makes Rails a mess.
  16. Exceptions are good - We need them. Error handling is always an issue and good, clean exception generation and handling support makes it easy to include it.
  17. Interpreted is Good. It makes writing code much faster. Must have a REPL.
  18. Compiled is Good. We always need speed. Only the stupid say 'speed doesn't matter'. It always matters, but very rarely at the expense of clarity.
  19. Do we need an IDE? I don't think so, but I don't know. I just write in TextMate on my Mac. I used to write in Emacs - and still use vi from time to time. I've tried Eclipse, but just couldn't deal with it. I think this is a non-issue except that the Language should support development without an IDE.
  20. Reflection - aka inspection - aka whatever - is Good. Python has that pretty right. All functions and classes take an optional documentation string which you can print in the interpreter by typing print foo.__doc__. It saves lots and lots of time paging through documentation. There is also a builtin function called dir() which generates a sorted list of all the attributes of its argument. Ruby has that wrong - I don't know how many times I wrote foo.methods.sort to try to remember the name of a method when hacking Ruby.
  21. Automatic Document Generation is Good - but the current systems stink. They are like a tail wagging a dog: Code is always more important than comments - and all documentation is comments. Why? Comments don't execute, so they always have bugs and eventually diverge from the meaning of the code [See Brooks: The Mythical Man-Month, where he points out that it's not possible to keep two separately maintained files in sync]. Documentation needs to be unobtrusive, compact, and easy. I have no idea how to do this - yet.
  22. To be Continued
Any Things to add to the list?

Tuesday, January 12, 2010

Apple Non-Usability. Is Apple Copying Microsoft?

I don't get it

Apple folks almost invented Usability testing and analysis. Tog, Nielsen, whoever else.

I just upgraded to Snow Leopard. It runs faster but . . .

- Preview defaults to pdf displays that are Too Big. I wasted half an hour fiddling around finding controls. AND there's no little box that says what size the image is. Is that too much to ask?

- iTunes doesn't seem to be searchable. Looks like the Apple folks are so enamored with graphics that they've forgotten that some of us might want to find what we want - Not what the Staff likes most or the newest or most popular. The browser thing is hidden in the View dropdown under 'Show Column Browser'.

I'm not too crazy about the iCal 'improvements', but that went south when I switched from Tiger. I don't use Mail - Thunderbird instead. Mail kept hiding my e-mail someplace that I couldn't find.

Same thing with iPhoto - I use Adobe because it doesn't force me to use Apple's 'libraries' - when there is a perfectly good file system.

There's more, but that's more than enough carping for now.

Apple is getting closer to being the New Microsoft.