Back to where we came from

I’ve had a small, but noticeable problem with my home audio interface.

Yes, I’m well aware that I often seem to have small problems just begging for a resolution, but neglected nonetheless for some period of time until… a blog post is finally written.

Anyhow, this particular problem stems from the dynamic tables that are oh-so-cool in giving us super-long tables that are filled dynamically by the backend.  It’s somewhat of a good problem to have, in fact.

Here’s the deal.  You execute a search and you’re staring at your results.  Were you to press F5 now, you’d see the exact same page staring back at you (provided you hadn’t scrolled down at all).  This is the case because a search always refreshes the whole page, or more specifically, requests a new URL in the browser’s address bar.

That’s great and all, but then you decide that you want to sort on a different field.  So you do.  And because the table is dynamic, the sort occurs without refreshing the entire page.  Now, if you were to press F5, you’d be looking at the same search results but the original sort order would be restored.

The truth is that this situation still occurs today in pretty much any DHTML app, unless you do some cool work to alleviate it (like I did at work, for a different but similar situation).  However, there’s more to the story.

Now, some links are smart enough to (essentially) ask the table about its condition and then make a request to the server based on the table’s answer.  So for example, if you click a link to add the results to a playlist, then the results will get added with the correct sort order.  And this could be done for every link on the page.

Except… they would have to become javascript links, vs. the straight HREF’s they are now.

Why does this matter?  Well let’s take allmusic.com as an example.  It used to be that some links – particularly those that were presented in list format – were javascript links.  If you attempted to open them in a new tab, well, any number of things could happen.  Recently allmusic made some changes and now many of their links are straight HREF’s, meaning you can open in a new window or new tab or whatever and it works as expected.

In our home audio interface we have a similar, occasional requirement of wanting to open a link in a new tab.  But if the link is a JS link, then you’ve got a problem.

It’s because of this that I decided that I didn’t want to present all of the links as JS links.  But I still wanted to be able to have the links reflect the state of the main table.

The solution I came up with is creative, though not the prettiest.

Basically, we leave the links as they are.  But, if the table changes for whatever reason, then we go out and explicitly rewrite links to reflect the changes.  This isn’t an ideal solution since it involves some overhead on the client.  But I can’t think of any other way to have regular HREF links which reflect changing properties elsewhere in the page.
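To make this concrete, here’s a minimal sketch of what “rewriting a link” could look like, assuming the table’s state boils down to a sort field and direction.  The `rewriteHref` helper and the `sort`/`dir` parameter names are my own illustration, not the interface’s actual code:

```javascript
// Rewrite a plain HREF so it carries the table's current state as
// query parameters.  tableState's shape is an assumption for this sketch.
function rewriteHref(href, tableState) {
  // Use a dummy base so relative links parse cleanly.
  const url = new URL(href, 'http://localhost');
  url.searchParams.set('sort', tableState.sortField);
  url.searchParams.set('dir', tableState.sortDir);
  // Hand back a relative URL again: path plus query string.
  return url.pathname + url.search;
}

// Example: after the user re-sorts by artist, descending.
console.log(rewriteHref('/playlist/add?view=browse',
                        { sortField: 'artist', sortDir: 'desc' }));
// → /playlist/add?view=browse&sort=artist&dir=desc
```

Because the result is still an ordinary HREF, middle-click and “open in new tab” keep working, which was the whole point.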

So the act of initiating this update is fired by the table’s scroll handler.  I decided that this was the best place to start the update, as the scroll handler has the singular task of determining which rows/tiles are in the viewport and hence need to be brought in from the backend.  Recall that every time you scroll, it’s necessary to make sure that you’re actually looking at something.  This is the job of the scroll handler; he does some calculations, determines what should be there, then starts the calls to put those things “there”.  Since the scroll handler is so integral to the operation of our dynamic tables, I thought he should be the guy who starts the HREF updates.

Now, it’s entirely possible that there are hundreds, even thousands of links on a page at any point in time.  Fortunately, the majority of those links are contained in the scrolling table itself.  It’s an oversimplification, but I can say that those links are already aware of the state the table is in.

That leaves a handful of other links scattered around the interface.  Meaning, the actual process of updating those links is not particularly CPU intensive.  And I even have some nifty “process control” code that makes sure that only one update process can run at a time, and only the most recent one will run.

(fine, I’ll explain how it’s done.  The scroll handler starts the update process as an async process (using a JS timer).  It’s entirely possible that the scroll handler will fire again while the update process is still running – unlikely, but still possible.  So before an update process is started, a random process ID is generated and stored with the table’s other properties.  The update process is then “launched” and is told its process ID.  Once the update process starts to run (and while it’s running) it checks that the table’s record of the update process ID matches the process’s own ID.  If there’s a mismatch – which will occur if a newer process is spawned – then the process halts and exits).
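That guard pattern can be sketched in a few lines.  The names here (`beginUpdate`, `isCurrent`, `pendingUpdateId`) are stand-ins I made up for this sketch, not the real property names:

```javascript
// Shared table object; pendingUpdateId records "the" most recent update.
const table = { pendingUpdateId: null };

// Called by the scroll handler: stamp a new random ID as the latest update.
function beginUpdate(tbl) {
  const id = Math.random();
  tbl.pendingUpdateId = id;
  return id;
}

// Called by a running update, before and during its work: am I still the
// most recent update, or has a newer one been spawned?
function isCurrent(tbl, id) {
  return tbl.pendingUpdateId === id;
}

// The scroll handler launches the update asynchronously via a JS timer.
function scrollHandler() {
  const myId = beginUpdate(table);
  setTimeout(function updateLinks() {
    if (!isCurrent(table, myId)) return; // a newer update superseded us
    // ...rewrite the page's HREFs here...
  }, 0);
}
```

The effect is that spawning a second update before the first one runs simply overwrites `pendingUpdateId`, so the stale update notices the mismatch and bails out.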

Anyhow, the end result is that a few things happen:

  • you can move to another page (zones, lists) and return to your browse results and the table will be positioned where you left it
  • you can resort your results, move to another page, then return to your browse results and the table will be positioned where you left it

Probably the only thing that’s left, then, is to store the selections.  Sometimes you make some selections, but then you want to drill into a record and get more info.  This requires you to abandon your selections, or perhaps open the drill details in a new tab.  It would be nice, perhaps, if you could move around and not worry about your selections, knowing they will be there when you return to the browse results.  This is much harder to do though… I mean, there are ways around it using cookies, but I’m trying to avoid those methods.

A further refinement to occupant detection

Occupant detection has been working pretty well (for the surveillance system) but on occasion it’s had some hiccups.  These have been difficult to pin down to a particular cause/effect relationship, but I recently decided to take a step back and make some design decisions.

As things were written, the occupant detection code kind of had one foot in the door and the other foot out of the door.  I say this because the code still seemed to want to abide by a fixed schedule (to set the Home/Away macro states) but also rely on occupant detection to explicitly set these states.  And it dawned on me (much too late, perhaps) that this can cause unnecessary problems.

Conceptually, there’s really no need for a schedule if you’ve got an active, automated occupant detection system.  There’s no need to set an explicit Home/Away state based on time-of-day or day-of-week, if the system is able to make a more representative determination of that state completely of its own accord.  And further, in the absence of a working detection solution (where that solution is present but broken for some reason) it makes more sense to default to an Away state – a sort of failsafe, if you will.

So my schedule pretty much said to set the system to Away at all times, and the occupant detection would override this.  But the thing is, we were bridging two worlds – there’s the detection world, which consists of occupant detection and also surveillance events.  Then there’s the notification world, which decides which events should lead to an administrative notification.  And the mechanism to pass state changes between these two worlds… well, it was rife with odd bugs.

So instead of chasing down these bugs – which is about as much fun as getting standardized web code to work in Internet Explorer – I decided that it made much more sense for the notification world to always believe itself to be in an Away state, while the detection/surveillance world would change based on occupant state.   Then, the notification world only has to query the detection world to see if anybody is home when an event occurs.

You might be wondering – if occupant detection and surveillance exist in the same world, why do we ever get to the point where the notification world needs to check in again with detection?  Why can’t detection tell surveillance to stop when it determines that occupants are Home?

The simplest answer is, again, conceptual.  The surveillance system is always active.  Even when we’re home, we want the system to start recording when it notices activity at the front door, for example.  We want that record to exist in case something odd happened.  And because surveillance is always active, notifications are always being generated.  The question is whether those notifications should be quenched.

So there it is then – the notification system has to query the detection system to see if occupants are home before a notification is sent.  If occupants are home, no notification is sent.   The surveillance event will still be recorded and archived, but no notification will be sent.  Then, if the detection system is in failure mode, the failsafe is that both worlds – detection/surveillance and notification – believe the entire system to be in Away mode, and a notification will be sent for every qualifying surveillance event.

This equates to a polling, or “pull” model.  The notification world has proven to be very stable, so in the current design we need an explicit “Home” result from the detection world before a notification is quenched.  And that’s the extent of the communications.  The surveillance world no longer attempts to push its state information to the notifications world.
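The pull model boils down to one question asked at notification time.  A hedged sketch, assuming the detection world exposes something like an `occupantState()` query (my name, not the system’s):

```javascript
// Should a notification be sent for a qualifying surveillance event?
// Pull model: ask the detection world at event time; never let it push.
function shouldNotify(detection) {
  try {
    // Only an explicit "Home" answer quenches the notification.
    return detection.occupantState() !== 'Home';
  } catch (e) {
    // Failsafe: if detection is broken or unreachable, assume Away
    // and let the notification through.
    return true;
  }
}

// Healthy detection reporting Home quenches the notification:
console.log(shouldNotify({ occupantState: () => 'Home' }));  // → false
// Broken detection fails safe to Away, so the notification is sent:
console.log(shouldNotify({ occupantState: () => { throw new Error('down'); } }));  // → true
```

Note how the failsafe falls out of the design for free: any failure mode that prevents an explicit “Home” answer leaves both worlds behaving as if the system were Away.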

So this approach has worked quite well over the last few days.  I’ll keep an eye on it, and of course I’ll be sure to report back here if something comes up 🙂

Collaborating a huge honking mess

Web 2.0 is nothing new.  Sites like Facebook and Flickr would not be half the sites they are today if not for the willingness of the masses to bring some semblance of order to the chaos.  Today we think of Web 2.0 and nobody bats an eyelash.

In some way, shape or twisted form, both Facebook and Flickr – and a host of  other Web 2.0 sites – are nothing but empty shells without the mass of user content that fills them.  Granted, Flickr’s initial mission was to bring order to the masses of images already on the web.  But there’s no denying that today, both services are intimately tied to user-generated content.

And it’s not enough to just amass content.  Google owes its bloated success to the very fact that it started off with a single focus in mind – bring order to the chaos.  Google itself has no content – that’s not entirely true, but when you think of its search engine and even its AdSense program, we’re talking about algorithms that sit atop the unfathomable amount of information out there on the Interwebs.  Whatever content Google has that it can call its own pales in comparison to the content on which it has built its business – content that’s not its own.

And that’s okay, because Google is good at what it does.  I like Google.  I don’t like Facebook.

Now, is it even right to put Google and Facebook in the same sentence?  One is a bona fide Web 2.0 company, the other isn’t.  And yet I believe that the answer is a resounding “YES!”, since Facebook is quite intent on becoming the most important thing to hit the Interwebs since… well, since Google.  Where Google crawls the Web and all of the loosely-connected content therein, Facebook is attempting to replace the content of the Web with the content within its own walled-garden.  Content created by the users, for the users.

Here’s the problem with Facebook.

People are not good at managing huge quantities of data.  We make spelling mistakes, we say “toMAYto” and “toMAHto” and mean the same thing, we say “bear” and “beer” and mean two different things.  One person sees a butterfly in an ink blot, another sees a unicorn.  Some of us are colorblind.  Some guys think Nicole Kidman is a knockout, others think her plastic is showing (I’ll take my N.K.  pre-“Peacemaker”, thank you very much).  20  people will watch Ali Velshi play with an iPad and record it, then post 20 different versions online of vastly differing quality and post individual links to each of their Youtube uploads.

In all of this, the single question is – who is correct?  Or, what’s the one right answer?

Who do we trust?  When Google attempts to bring order to chaos, it’s very simple – you either trust Google’s algorithm (and believe that Michelle Obama is a monkey – uh oh!) or you don’t.  It’s in Google’s best interest to keep their algorithm relevant and accurate.

Not so with Facebook.  Besides the ridiculous amount of information overload that comes with watching people try to one-up each other with the frequency of their status updates and wall posts, you also have the potential to see the same information repeated ad infinitum by a multitude of people (hello retweets!)  Logging into Facebook is like taking a trip down the rabbit hole, and getting out is harder than knocking the socks off of any Agent that the Wachowski brothers could ever dream up.

And then there’s the question of tagging.  Oh goodness, tagging.  Is that flower red, orange, or auburn?  Nuff said.

Before Web 2.0, we had IRC.  We had newsgroups.  My goodness, we even had Bulletin Board Systems.  Then we got Geocities, then MySpace, then Facebook.  I’m sure I’m missing some stuff, but that’s secondary.  Of prime importance is that the only thing that’s really changed is how much free rein we’ve given people to paint the world in their own colours and shove it on your monitor.  Then we’ve asked a million other people to comment on it, tag it, link to it – then we called it “the future” and attempted to monetize it.

I’m not sure which adage is more suitable: “Too many chefs in the kitchen”, or “An infinite number of monkeys with typewriters…”

Understand that I’m not against user-generated content.  Rather, I’m against free-form, user-directed collaboration.  I mean, if you have some data that you want classified into one of five categories, by all means let the users have at it – as long as all of the information gets classified, there are no duplicates within the data, the categories are strictly defined, and the majority wins.  Anything less is a failed experiment.

Facebook, take heed.