i see bots

In the future, you won't think this is so weird

What makes a “thing” a “thing” (in the IoT), anyway?

I feel that someone should attempt to define what a “thing” in the Internet of Things is, but I’m not sure it’s going to be me.

Still, at the start of all this, I did toss out a simple capability-based definition, but I’m wondering if the time might be right to take another shot at it, or to try to outline the classes of “things” that we might see in the fullness of time?

For instance, here’s a quick take on classes of “things”:

  • Thing Class #1: Single-purpose device with limited state, CPU, and connectivity capabilities. Not directly addressable (requires a “hub” or the like to communicate with the outside world). No UI. Examples: Z-Wave switch or 1-Wire sensor or a Philips Hue lightbulb.
  • Thing Class #2: Single-purpose device with TCP/IP connectivity (including DHCP support), with support, via some protocol, for a control API (ideally REST) and “content” as necessary. May have a UI. Example: a smart thermostat (such as a Nest).
  • Thing Class #2A: A subclass of #2, consisting of a “hub”-type device whose main function is to orchestrate, and provide access to, a network of Class #1 devices. Examples: a 1-Wire hub, a proprietary sensor hub, or a Philips Hue controller. The hub will likely support a protocol for discovering / browsing / adding / removing / initializing the devices it manages.
  • Thing Class #3: I think the main distinguishing factor here is that devices in these classes are capable of accepting and executing general-purpose code (“apps”) on a dynamic basis… in other words, these are small “computers” in the classic sense, and, as such, can be deployed to function as instances of other classes of devices (such as a Class #2A “hub” or as a Class #2 thermostat). Examples: An Arduino or Beagle board, or even a 5 year old used PC that you bought for $25 at a used computer store and installed Ubuntu on and are using for your home automation project.
  • Thing Class #4: I feel like it’s important to identify the class of things that includes “Connected TVs”, smart A/V receivers, or Sonos music players. These are high-function devices that would seem to be just Class #2 things. Yet, more and more, these kinds of things include a set of apps (for, say, viewing Amazon or Netflix content, or even browsing the web), so they would seem to be Class #3 things. But somehow they don’t feel that way… they seem to be single-purpose (show video content or play audio content) and are sandboxed (restricted) with regard to which apps can be installed on them.
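For fun, the capability tests above could be sketched as a little classifier. This is just a toy model of my own - the field names and the decision heuristic are invented for illustration, not any standard taxonomy:

```python
from dataclasses import dataclass

@dataclass
class Thing:
    name: str
    directly_addressable: bool   # has its own TCP/IP stack?
    runs_arbitrary_apps: bool    # can accept general-purpose code?
    sandboxed_apps: bool         # app model restricted by the vendor?
    is_hub: bool = False         # orchestrates Class 1 devices?

def classify(t: Thing) -> str:
    if not t.directly_addressable:
        return "Class 1"            # leaf device behind a hub
    if t.runs_arbitrary_apps and not t.sandboxed_apps:
        return "Class 3"            # a small "computer" in the classic sense
    if t.sandboxed_apps:
        return "Class 4"            # appliance with a walled garden of apps
    return "Class 2A" if t.is_hub else "Class 2"

print(classify(Thing("Z-Wave switch", False, False, False)))           # Class 1
print(classify(Thing("Nest thermostat", True, False, False)))          # Class 2
print(classify(Thing("Hue controller", True, False, False, is_hub=True)))  # Class 2A
print(classify(Thing("old Ubuntu PC", True, True, False)))             # Class 3
print(classify(Thing("Connected TV", True, True, True)))               # Class 4
```

One nice property of writing it down this way: it makes plain that Class #4 is really “Class #3 hardware with a Class #2 posture” - the sandbox flag is doing all the work.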

I think that’s it (for now). This is definitely not done yet… for instance:

  • I hate the “Class #1”-type nomenclature, so that will need some work
  • I’ve cleverly not included any analysis of cloud-based aspects of IoT… might be orthogonal, but does need to be considered… are there any examples of “cloud-based devices”? “Virtual devices”?
  • Another pivot to consider is the nature of the protocol for communicating with things that support TCP/IP. It seems like the world is settling on HTTP / REST as the way to access and control these devices. But there might be some important details here.
  • What is the nature of the connectivity? Is it intermittent by design (because, for instance, it’s a sensor on a mobile phone, or it’s a sensor on a car that is parked at most once a day at the user’s house, which contains the Wifi access point that it is authorized to access) or due to, say, weather conditions?
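On the HTTP / REST point above: a sketch of what controlling a Class #2 device might look like. The endpoint path and payload shape here are entirely hypothetical - no real device exposes exactly this API - but the general pattern (a PUT of desired state to a resource URL) is where things seem to be heading:

```python
import json

def build_control_request(host: str, device_id: str, state: dict):
    """Return (method, url, body) for a hypothetical device state change."""
    url = f"http://{host}/api/v1/devices/{device_id}/state"
    body = json.dumps(state)
    return ("PUT", url, body)

method, url, body = build_control_request(
    "thermostat.local", "living-room", {"target_f": 68, "mode": "heat"})
print(method, url)
print(body)
```

The “important details” would show up in exactly this layer: authentication, discovery of the device’s address, and what happens when the device is asleep or unreachable.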


More on device updates: it’s harder than it looks

I wrote a large, rambling post the other day regarding my own hands-on experience with cascading failures due to, among other things, the perils of pushing large, complex, irreversible, and required updates to IoT appliances - such as “Connected TVs” and the like.

In order to try to move the ball forward here (and not just complain!), here’s my contribution… some ideas on rules/policies that device designers should follow when it comes to “update architecture”:

  • Reversible: Updates should be reversible, in case of external compatibility issues (the update introduces changes in its external interface that break its integration with other systems) or internal issues (the update fails to install completely, or otherwise renders the device unusable).
  • Predictable: Users / controlling systems should be in control of when an update cycle (and any ensuing device reboot) is initiated.
  • Non-intrusive: Updates should download in the background and, if possible, install in the background.
  • Preserves durable settings / state: Updates should respect existing settings and configurations where possible.
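As a sketch of how the “Reversible” and “Non-intrusive” rules might combine in practice, here’s a toy two-slot (A/B) firmware scheme: stage the new image into a spare slot in the background, switch over only when told to, and fall back if the new image doesn’t come up healthy. The slot model and method names are my own invention, not any particular vendor’s design:

```python
class Device:
    def __init__(self, version: str):
        self.slots = {"A": version, "B": None}  # B is the spare slot
        self.active = "A"
        self.settings = {"wifi": "home-net"}    # durable state, never touched

    def spare(self) -> str:
        return "B" if self.active == "A" else "A"

    def stage_update(self, new_version: str):
        # Non-intrusive: download/unpack into the spare slot;
        # the running image is untouched.
        self.slots[self.spare()] = new_version

    def activate(self, healthy: bool = True) -> bool:
        # Predictable: only called when the user/controller says so.
        candidate = self.spare()
        if self.slots[candidate] is None:
            return False
        previous = self.active
        self.active = candidate
        if not healthy:                 # Reversible: new image failed,
            self.active = previous      # fall back to the old one.
            return False
        return True

d = Device("1.0")
d.stage_update("2.0")
print(d.slots[d.active])              # still running 1.0 while 2.0 stages
d.activate(healthy=False)             # simulated bad update: rolled back
print(d.slots[d.active])              # still 1.0
d.activate(healthy=True)
print(d.slots[d.active], d.settings)  # now 2.0, settings preserved
```

The cost of this design is real, though: the device needs roughly twice the storage for its firmware image, which is exactly the resource constraint the paragraph below worries about.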

Obviously, not every “thing” in the “Internet of Things” will have enough memory or CPU power available so that its software engineer will be able to reliably follow the rules above… for some “things”, there may not be any “background” in the process model, or enough memory to store a complete operating system / firmware image during a background download, or even a UI with which to offer the user a choice in the matter.

But you get the general point: keep the user or controlling system in control of the update process, and make it reversible in case there are problems.

The natural presumption is that, when it comes to updates, a “thing” doesn’t exert an outsized negative effect on the operation of the hierarchy / network that it’s a part of. You wouldn’t expect a low-level leaf “thing” in a constellation of “things” to require a reset of the entire system just to take an update. The amount of risk that it injects into the system should not be larger than its role in the system.

Conversely, the more intelligent “things” out there - “Connected TVs”, for instance - which are sold like appliances but in reality are complex, stateful devices - represent large challenges for “update architecture”. These devices are complex enough that updates can take many minutes (40 in my Apple TV example) and induce significant risk. But, to the user, these “things” are “appliances”, so the average person’s expectations do not match the complexity of the device. I just want to watch a movie - tell me why I need to wait 40 minutes for an update that I didn’t ask for, can’t postpone, don’t understand the value proposition of, and am, in fact, leery of?

Getting the “update architecture” right is challenging. Look at Microsoft’s recent stumble here (where it had to withdraw its major Windows 8.1 update). I’d argue that it’s taken years for Microsoft to get their approach to updates right, and that they generally do a pretty good job at it, which makes this failure stand out even more.

PS: As I write this, I’m thinking that there might be other sets of rules to think about here… such as how to archive the state/settings of a system of “things”, and how to apply such an archive to a system (to restore it to a previous known state)? A post for another time.

On cloud-based IoT (Internet of Things)… and woesome device updates

Over at GigaOm, Mike Harris, CEO of Zonoff, kicks off a survey of recent “Connected TV” efforts with this comment:

“But how will the connected home make the jump from a favorite toy of the tech elite or a status symbol of the wealthy to be a ubiquitous technology that reaches the broad mass market of consumers? As usual, history offers some valuable lessons. In fact, we only have to go back a few years and look at how the connected TV market took shape to get a sneak peek of what will soon unfold for the connected home.”


In the article, the author posits a case for the future of “Connected TV” that has something to do with making the “Connected Home a reality for everyone”, presumably through the software platform his company offers.

I found the premise jarring, though, for other reasons: I happened to read the article after having experienced a kind of triple witching hour in our own house involving “Connected TV” (or, as it turns out, “Not Connected TV”).

Here’s the full story, in the hopes that there might be a moral in here somewhere…

The family had been talking all day about what movie to watch during “Family Movie Night”. Given this, we were all aware that any candidate offered for approval had to first be vetted for availability on either Amazon Instant Video or Netflix’s streaming service. After some discussion, we settled on “The Fantastic Mr Fox”, a totally excellent (says me) flick that the whole family can enjoy (as long as the adults in the room don’t snicker too much when the default expletive, “cuss”, is used in various forms… “What the cuss is going on here?!?”, “Who the cuss…”, and so on).

On a PC, I browsed to the requisite page on Amazon’s site (here), selected the HD version, and rented the movie. By then, the family had gathered in the living room, and the 15 year old had fired up the Sony BluRay player that we use for streaming Amazon content.

However, when the clumsy Sony software repeatedly indicated that it couldn’t access the movie service (or some variant of that generic message), we realized to our horror (yes, #firstworldproblems) that we might not be in for smooth sailing. Sure enough, a Twitter check revealed that we weren’t the only ones having difficulty. I believe the issue was that the Sony service - the Playstation Network - that the player relies on for handling requests was down. Good for Sony - they have a “network status” page, but - Bad for Sony - it showed “green”. Not for me, or other folks, apparently.

OK, family getting mildly restless. I head down to the basement to reset the router and cable modem, knowing that that wasn’t really going to make any difference, but it bought some time.

I turned to the Xbox. We hadn’t used it for a while, and so - knew this was coming - there was a huge update that it had to take before I could sign into Xbox Live. About 8 minutes later, the Xbox rebooted, and when I fired up the Amazon app, it failed to launch (with a hexadecimal error code - only MSFT would do that). I rebooted the Xbox but still no joy.

Family still in their seats, but making noises like “should we be making popcorn?” (from the 9 year old) and “the state of IT around here” (from the wife).

Then, inspiration struck, as I remembered reading that the latest iteration of the Amazon Instant Video app for iOS added support for AirPlay! Of course - that’s the ticket. Apple stuff always works!!

I fired up the iPad, installed the app, and switched the A/V receiver over to the Apple TV unit. Because the Apple TV unit is in the rack in the basement, I’ve been using the Apple Remote app for iOS to interact with it. (I do have an IR receiver in the Living Room and IR emitters on other equipment in the basement rack, but just not on the Apple TV unit as yet.)

I used the Remote app to wake up the Apple TV, which promptly realized that - wait for it - it, too, needed an update!

Now this was bad news, because (i) there’s been a history of botched Apple TV updates from Apple and (ii) in my experience, the updates take a long time. Sure enough, once the update started, the nearly imperceptible movement of the progress bar predicted a long wait this time. The family decided to sit down to dinner instead of staring at the progress bar.

The Apple TV update took 40 minutes, but it didn’t brick the device. So, we were in business, right? No, not quite. The iPad version of the Amazon Instant Video app found our rental, but wouldn’t play it (suggesting that we try again later - how thoughtful!).

Now we were desperate, but there was one more device/path to try: I installed the Amazon Instant Video app on my iPhone 4S, found the rental, and fired it up over AirPlay, and it worked - for about 10 seconds, then stalled due to buffering. We paused the stream to let it buffer a bit, but then - more horrors! - I saw that the phone’s battery was down to 10%. That’s red territory. In one last clutch save of the evening, I found and deployed my external battery, a HyperJuice model MBP-100, typically used to augment my MacBook Pro; it also has a USB port, and when connected to a phone or iPad, can power it for days. (Actually, I think it could even start our car).

So, we were in business… mostly. The Amazon app on the iPhone would stall occasionally, and the device got alarmingly hot. But still, it was impressive: the iPhone 4S could pull down an HD stream and send it over AirPlay to the Apple TV, delivering a reasonable experience.

At the popcorn break, I tried the iPad again (a 3rd generation unit), and for some reason, it was now willing to stream the film, so we resumed the movie on that device.

OK, so what’s the moral here?

I really wonder if Sony should ever write software or operate services… In my opinion, they just don’t seem very good at it, and the requirements of, and stakes in, operating services just keep going up.

On the other hand, I think Amazon and Netflix do a pretty good job with their streaming services. (And note that Netflix itself relies on Amazon’s AWS infrastructure.)

Beyond the initial failure, the issues revolved around the need for the devices to take large, mandatory updates that users have learned to fear due to the possibility of instability (or worse).

For instance, how is it acceptable that an appliance like the Apple TV could require a 40-minute update? Why is that acceptable? Or that the Amazon app simply fails to launch on the Xbox?

If people are to trust the “Connected TV” scenario, the people supplying the software for the devices have to think different.

Perhaps updates need to be divorced from the UX path, with downloads and updates happening asynchronously in the background. It might require more storage on the device (extra space to download, unpack, and organize the update).

And when an update is “required”, that’s not very friendly, is it?

I realize that it’s simplest for the service provider if it can simply require that all connecting devices be operating at the latest and greatest version, requiring a download and update if not.

But as we move into a more IoT-oriented world, it’s going to get pretty messy if an update to a service causes a cascade of required device updates, which may in turn cascade to other, dependent devices. A failure somewhere in the chain - it could be as simple as a failed or corrupted download - could disable entire scenarios (e.g., “watch a movie” or “open the garage door”), or classes of scenarios (e.g., “security”). That’s too risky to contemplate.
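To make the cascade worry concrete, here’s a toy dependency model: given that one service bumps its required version, which dependent devices are forced to update (and therefore put at risk)? The device graph is invented purely for illustration:

```python
# Each entry: device -> list of things it depends on. If any dependency
# updates its required version, the device must update too.
DEPENDS_ON = {
    "movie-app":    ["connected-tv"],
    "connected-tv": ["streaming-service"],
    "garage-app":   ["home-hub"],
}

def cascade(changed: str) -> list[str]:
    """Return everything forced to update when `changed` updates."""
    affected = []
    for device, deps in DEPENDS_ON.items():
        if changed in deps:
            affected.append(device)
            affected.extend(cascade(device))  # the cascade continues downstream
    return affected

print(cascade("streaming-service"))  # ['connected-tv', 'movie-app']
```

Every element in that returned list is a fresh opportunity for a failed or corrupted download, which is the whole point: the risk compounds with the depth of the chain.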

I think the IoT has to be able to operate in a state of partial inconsistency. Service providers and software providers for devices need to make progress here.

PS: On a somewhat related note: I was pleasantly surprised today, as we shifted off Daylight Savings Time this morning, that most of the devices in the house picked up the change just fine. Just our (stupid, unconnected) oven and bedside clocks (the parents’, not the kids’ “atomic clocks”) needed to be manually adjusted. That’s what I’m talking about. Maybe by this time next year, we will have made some progress on those laggard devices.

PPS: Thinking more about Sony… a contrarian view would be this: “Look, you paid $225 for that Sony BluRay player four years ago, and you’ve been using Sony’s Playstation Network for free since then… how do you expect them to pay for the cost of operating that service, let alone make any money from it?”. The consumer device business, especially when it comes to BluRay players, is a race to the bottom. But the expectations keep going up - the players need to support Wifi, support apps, and otherwise distinguish themselves. But no consumer is going to pay a premium at the time of purchase, nor pay a monthly subscription fee to the manufacturer (such as Sony) to run that service. Netflix or Amazon will get those dollars. So, it’s a loss leader, I think, for Sony and others to operate their service networks.

When viewed in this light, I’m not surprised that the Sony service seems to be gaining a reputation for unreliability.

Having said this: How hard could it be??🙂 It’s true that the scale of the problem for Sony is much smaller than that of Netflix or Amazon… Sony “just” needs to operate a directory service for its BluRay players, to dispatch their users to, say, Netflix.

Netflix, on the other hand, has to actually operate a video streaming service, which is a planet-scale challenge.

life imitates game

I’m not sure whether this is impressive, surprising, or just cynically ironic… but: what happens when you combine the competitiveness of online gaming with meatspace motion sensors and connected devices? You get: GreenGoose!

With GreenGoose (presumably you can blame the name on the mess that is the .com namespace) you can “Track, your lifestyle”. GreenGoose is a “real-world game platform that automatically measures things that you actually do”… as opposed to those things you do online, in a game world, which we all know don’t count. With GreenGoose, you can track, say, how much exercise you get, how often you brush your teeth, or presumably anything else that one of their tiny motion sensors can detect and report on. The more you do the things you say you’re going to do (“intentions”), the more “lifestyle” points you get.

The competitive aspect of this is, of course, similar to what you’d find in any game, but the “meatspace” aspect is similar to FourSquare, where you’re encouraged to “check into” the restaurants, cafes, and other venues you visit, with the goal of becoming “mayor” of those establishments. It’s “only” a virtual accomplishment, but that hasn’t stopped millions of people (reference, reference) from registering for the service.

I’m not sure how interesting this will be to the average person, but with a starting price of $24 for the kit, it might be an interesting hacking target. The sensors don’t appear to be wireless, though. But the content on the website seems to imply that the sensors are small and perhaps embedded into stickers that you attach to, say, your dental floss container.

With the promise of developer and vendor APIs and partner opportunities, we could see some interesting scenarios: “Got a product with a healthy consumer behavior that can be measured in a fun, helpful way? We’ve got the patent pending sensors and algorithms. Let’s talk.”

PS: Imagine the Valentine’s Day gift scenarios!!

Blog topic: “The Internet of Somethings” and “G-Force”(the movie) (PLOT SPOILERS)

The opportunity to comment on how the Internet of Things is portrayed by Hollywood doesn’t come along often. I hereby lay down a prediction in this post: when Hollywood does speak on this topic, it will always involve a deranged gazillionaire. Here’s my first point of evidence.

In “G-Force” (IMDB link), several rodents, endowed with human-level intelligence, the ability to communicate, often sardonically, via spoken English, and equipped with an NSA-scale budget and a set of cuddly human overseers, take on an evil corporate type who’s made a global name for himself in the small appliances market.

To avoid giving away tooooo much of what could be described as a “delicate” plot line, let me just say that, apparently, every one of the appliances made by this far-thinking guy has an extra special “something” inside that “comes alive” when a secret communications chip is enabled, creating a mesh of “somethings” intent on doing evil. It’s not quite the Singularity… maybe closer to Skynet meets Mr Coffee, I guess.

There are a couple of extra twists in the plot to presumably adhere to some writers union rules, but otherwise, that’s it: the Internet of Things will hit the mainstream consciousness when our espresso makers start spewing hot lead instead of hot coffee.

An aside: “Circuit Diagram”

If you’re not familiar with “xkcd”, then drop what you’re doing and check it out (but in a new browser window, ok??)


This one is an all-time favorite: http://xkcd.com/730

It’s been around for a while, but every time I stumble upon it, I have to stop and take it all in again. It’s chock-full of deep, seasoned nerd humor. “Bury deep, but not too deep.” “Omit this if you’re a wimp.” “Arduino for blog cred.” Oy. A tough act to follow.


Net Neutrality and the Connected Home? (updated)

Buried in last week’s net neutrality proposal by the FCC was an interesting tidbit about the connected home.

As you may know, the controversial proposal put forth a requirement that all broadband providers allow their customers to access all (legal) online content. You are probably thinking, “I thought they were already required to do that??”

Actually, no. In April of this year, the FCC lost an important court ruling when the United States Court of Appeals for the District of Columbia ruled that the FCC lacked the authority to enforce the notion that providers could not discriminate against content services carried over their pipes. This case arose out of a finding the FCC issued in 2008, in which it ruled that Comcast had secretly blocked or slowed (“shaped”) traffic associated with the BitTorrent service.

In May, the FCC chairman, Julius Genachowski, indicated in response that he was considering regulating broadband providers using the same regulatory powers the agency uses to regulate telephone networks. The difference is that broadband providers have been regulated as “information services”, not as “telecommunications services”; the latter brings with it much more scrutiny and regulation. This was a controversial proposal; service providers claimed that such a level of regulation would stifle the huge investments required to maintain and expand the country’s broadband networks.

However, in last week’s proposal, Genachowski did not take that significant step, but he did set forth a basic “no blocking” requirement while allowing providers to “shape” traffic as necessary to ensure network health, as long as they disclosed their practices. Wireless providers would be given more leeway, given the more limited nature of their networks. There was also a discussion about usage-based pricing, although this is less relevant, given that many major wired and wireless providers have implemented tiered or capped service levels.

(Personally, I found all of this ironic, given that in my neighborhood in Seattle, the best that the local telephone monopoly can offer me is dial-up networking. Comcast’s is the only wired broadband service available to me. So if I find that I don’t like Comcast’s price tiers or “shaping” practices, I really don’t have a choice. I suppose I could turn to 4G wireless services, but from what I can tell, those are still in the infancy stage, combining three or more of these unfortunate characteristics: expensive, relatively slow, limited to fairly low monthly data transfer ceilings, limited to certain geographic areas, and unreliable.)

I found these articles to be excellent summaries of the proposal: PC Magazine and New York Times.

I noticed the following statement in several of the articles on this topic:

“In addition, the proposal would let broadband providers experiment with routing traffic from specialized services such as smart energy grids and home security systems over dedicated networks, as long as the practice doesn’t slow down the public Internet.” (Yahoo!)

“The F.C.C. also will allow companies to experiment with the offering of so-called specialized services, providing separate highways outside the public Internet for specific uses like medical services or home security.” (New York Times)

What does this mean, I wonder? When I think about cloud-based home automation (see “A PIVOT TOWARDS THE CLOUD?” section in this post), or IP-based remote security monitoring, what I worry about isn’t latency but rather reliability. I wonder, then… will Comcast be offering me, say, some kind of uptime guarantee for a premium in their broadband offerings of the future?

What kind of broadband connectivity availability guarantee would be acceptable to me? Let’s say that they offer “three 9s” of availability, or 99.9%, which translates to just under 9 hours of downtime in a given year. (See a handy chart here)

“Three 9s” is the SLA (Service Level Agreement) that Amazon offers for its AWS offerings. If Amazon fails to meet that SLA, the customer is compensated in the form of a credit that appears on a future bill.

Would 9 hours of downtime a year be acceptable for a home automation system? I’d say that if those 9*3600=32,400 seconds of downtime were spread out uniformly across the year (say, 89 seconds each night between the hours of 2am and 3am!), I wouldn’t mind. But if all of those seconds of downtime hit all at once, it would be unacceptable.
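The availability arithmetic above is easy to get wrong, so here’s the conversion written out as a quick sanity check:

```python
HOURS_PER_YEAR = 8766  # 365.25 days * 24

def downtime_hours(availability: float) -> float:
    """Hours of allowed downtime per year at a given availability level."""
    return (1 - availability) * HOURS_PER_YEAR

for label, a in [("three 9s", 0.999), ("four 9s", 0.9999), ("five 9s", 0.99999)]:
    print(f"{label}: {downtime_hours(a):.2f} hours/year")
# three 9s works out to ~8.77 hours ("just under 9 hours"),
# four 9s to ~0.88 hours (~53 minutes).
```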

To be fair, my current home PC-based home automation system has been down for hours at a time, usually planned but not always. A goal in moving to the cloud (all other things being equal) would be to leverage the actual professionals employed there to keep redundant systems healthy.

What about 99.99% availability? That’s just under an hour (53 minutes) a year. Much better, and probably even acceptable, but from what I can tell, only a few hosters offer that level of service, much less broadband providers.

As far as I know, the only service providers who venture into the rarefied air of “five 9s” - 99.999% - are the POTS guys - the telephone company. The infrastructure investment is impressive. I don’t think that anyone else, outside the corporate data center, bothers, though. Cloud-based architectures are designed to deal with 99.9%.

The possibility of significant disruptions in connectivity is my biggest objection as I think about the characteristics - positive and negative - of a cloud-based home automation or security monitoring service. If there’s a possibility that the broadband connection could go down for hours at a time, the functions you’d offload to the cloud would end up being trivial or secondary at best.

Given that it’s unlikely that you could find a 99.99% availability SLA from a broadband provider - and that even if you did, it would be very expensive - you might conclude that continued thinking about a cloud-based approach for home automation is a waste of time.

But there is another approach: consider implementing a “diverse” secondary broadband connection. For instance, if your main provider is Comcast, then you’d have a secondary emergency connection via, say, a 3G or 4G wireless provider (supplied by a provider other than Comcast). You would, in effect, be banking on the low probability that both providers would fail at the same time, and paying for that level of availability in the form of a second (wireless) connection that would rarely see service (and which, therefore, you hope would be inexpensive). Your connection to the cloud would be via a fast main pipe, backed up by a (perhaps cheaper) and slimmer backup pipe.

It’s probably too simplistic to compute the combined “SLA” of this primary/secondary connection architecture, but let’s do it anyway. Let’s assume that your primary provider is Comcast… I have anecdotal evidence that they don’t deliver at the 99.9% level… let’s call it 99.7% (26 hours of downtime a year). Let’s assume that the wireless connection is slightly more reliable, given its roots as a telephone service - say, 99.99% (1 hour a year). If the two fail independently, the probability that both are down at the same moment is something like: (26 hours/8766 hours in a year) * (1 hour/8766 hours in a year) ≈ 0.3% * 0.01% ≈ 0.00003%. Are those acceptable odds?
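Percentages multiplied by hand are easy to botch, so the product is worth double-checking in code (assuming, simplistically, that the two providers fail independently and that downtime is spread uniformly over the year):

```python
HOURS_PER_YEAR = 8766  # 365.25 days * 24

p_primary_down = 26 / HOURS_PER_YEAR    # ~99.7% availability (26 h/yr down)
p_secondary_down = 1 / HOURS_PER_YEAR   # ~99.99% availability (1 h/yr down)
p_both_down = p_primary_down * p_secondary_down

print(f"both down at once: {p_both_down:.7%}")
print(f"expected overlap: ~{p_both_down * HOURS_PER_YEAR * 60:.1f} minutes/year")
```

That works out to roughly 0.00003% - on the order of ten seconds a year of expected simultaneous outage. Independence is the shaky assumption here: a regional power failure or a backhoe through a shared conduit takes out both pipes at once.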

Let’s assume that those are indeed reasonable odds. So if you’re happy about your broadband connection being reliable, what about the service you’re connecting to? If it’s hosted at Amazon AWS, then we’re back to the 99.9% level. But perhaps there are things we can do about that. More on this later.

(There are other critical objections to the concept of a home automation system in the cloud… I’ve found this thread on AVSForum to be interesting for this reason. The signal-to-noise ratio varies greatly over the 10 or so pages of this thread, but I found very interesting comments from a number of folks on that thread… but your mileage may vary.)

UPDATE #1, 6 December 2010: In the original version of this post, I said that “what I worry about isn’t latency but rather reliability”. Well, I found myself thinking about that statement a bit more, and decided to run some simple tests to figure out if that statement was warranted.

So I ran some very simple tests: I fired up an Amazon EC2 instance, telnetted to it, and ran “ping” from there to my home gateway (a Linksys appliance), courtesy of Comcast’s home broadband offering (3 Mb/s upload, I believe). I used a dynamic DNS service to resolve the address of my gateway. These are the results, via ping:

Amazon EC2 instance running in “us-east-1d” availability zone: ~92 ms

Amazon EC2 instance running in “us-east-1a” availability zone: ~99 ms

This compares with something like 2 to 2.9 ms ping times within my LAN (a PC on my LAN pinging my gateway).

So… the cloud is ~ 30 times ‘farther away’ as compared to my home’s LAN. Is this an issue?

Here’s a thought experiment: Let’s imagine that I’ve got Home Automation functions running on an EC2 instance. Let’s further imagine that a human in my house flips a switch which has been automated in some way. How long would it take a simple Home Automation gateway on my LAN to notice that state change, communicate that fact to the EC2 instance, and send back a response? It would appear that cloud latency alone would require ~200 ms for a round trip. My very informal tests with Z-Wave (using HomeSeer’s “Z-Seer” utility) indicate that communicating with a Z-Wave switch takes a small number of milliseconds. And if I assume that the code running in the EC2 instance can turn around a response in a small number of milliseconds, then it would appear that the cloud latency times will dominate this scenario. Perhaps this cloud-based scenario could execute, end-to-end, in under, say, 300 ms. Would this be “interactive” enough for human-scale Home Automation? I don’t know. It might be.
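Summing up the informal latency budget from the thought experiment - every number below is a rough estimate from my own informal tests, not a measurement of a real end-to-end system:

```python
# Rough end-to-end budget for "flip a switch, cloud decides, device reacts".
budget_ms = {
    "z-wave: gateway notices the switch event": 5,
    "cloud round trip (~2 x ~100 ms ping)": 200,
    "ec2: code turns around a response": 5,
    "z-wave: actuate the target device": 5,
}

total = sum(budget_ms.values())
for step, ms in budget_ms.items():
    print(f"{ms:>4} ms  {step}")
print(f"{total:>4} ms  total (target: < 300 ms for an 'interactive' feel)")
```

Laid out this way, the cloud hop is ~90% of the budget, which is the whole argument: optimizing anything on the LAN side is pointless until the round trip to the cloud shrinks.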

I think the next test would be to actually code up a simple scenario that actually runs in the cloud and see how well it works… if anyone out there reading this has any ideas about this, would love to hear it…

UPDATE #2, 6 December 2010: Interestingly enough, saw these posts today:

  • “Get Ready for $99 Security, Home Automation from Comcast/Xfinity” (link)
  • “AT&T Acquires Xanboo, Developer of Home Automation Platform” (link)



More “100”

In my post on “100 bots”, it appears that I unwittingly tapped into a “100” theme. ReadWriteWeb reports that “ThingMagic”, a company that seems to focus on all things RFID, has been building a list of 100 things you can do with RFID. Each entry in the list is an informal blog-like post with a short description of the application, with links to more information and exploratory questions for the reader.

It’s an interesting list, if you keep in mind the fact that it’s coming from a company that sells RFID technology. It is a bit uneven; not all of the “100 uses” are at the same level or carry the same weight. Some of the applications described actually exist, while others are hypothetical.

As I toured the list, I noticed a common thread in several articles describing human-tracking applications… these articles ended with an open-ended question along these lines: do you, dear reader, find this application promising enough to outweigh the privacy concerns that often arise when RFID is involved? Some examples:

  • “Can You See Mi Now?” describes a bicycle-safety application implemented in the Danish city of Grenå: “the city implemented battery-powered RFID readers at busy intersections designed to read RFID tags placed in the steering columns of bikes.  When a cyclist approaches and stops at an intersection, the RFID reader sends a notice to an electronic sign mounted on the traffic light pole.  This notice triggers the display of a flashing ‘cyclist’ image, indicating that a rider is near and drivers should look before making a turn.” The idea is that a motorist would notice the flashing warning and take extra care when turning. The article ends with this question: “Does addressing a real safety issue - like reducing bicycle related deaths and injuries - move you past privacy concerns you may have with RFID?”
  • “India’s National ID Card Program” entry outlines aspects of India’s initiative to store fingerprints and iris scans for all of its citizens, with the goal of, among other things, delivering better services while reducing fraud. This initiative is apparently accompanied by RFID-equipped national identity cards. This article ends with the question: “What are your thoughts about the growing use of RFID and biometric-enabled national ID cards?  Do the proposed benefits of modernization, reduced fraud, and security outweigh the potential risks?”
  • “RFID-Enabled Smart Displays” describes a new kind of synthetic vision-equipped public-area display that is smart enough to tailor advertisements and other information based on what clues it can discern about the person standing in front of it… such as gender, age range, and height. The article teases out how that customization could be even more interesting if the person were wearing an RFID tag which allowed the display to access more “personal preferences”. The article ends with this question: “Share your thoughts about the evolution of smart signs.  Where will they work?  Where won’t they work?  How are personal data security issues best addressed?”

It’s commendable that these articles point out the potential for privacy concerns when it comes to tracking people via RFID technology, even if solutions are not proposed. Elsewhere on the ThingMagic site, privacy is described as a future topic that needs to be addressed:

“New technical and policy approaches will have to solve the real privacy and security concerns identified by industry analysts, technologists, and public watchdogs. If not, restrictive legislation or public backlash could thwart widespread acceptance—and limit the powerful benefits that RFID offers businesses and consumers.”

There’s also a “Dead Tags Don’t Talk” discussion, about how retail-purposed tags - such as those embedded in clothes, for inventory purposes - can be disabled at the point of sale, so that their owners aren’t trackable afterwards.

I did wonder, however, why “privacy” wasn’t one of the 136 tags with which these articles were tagged.

My take on today’s RFID is that it’s great for things but not yet ready for people, outside of job/organizational security applications.

Robots in the News

A quick recap of a few recent mainstream articles concerning “robots” and robot platforms…

  • “Protecting Your Home From Afar With a Robot” (New York Times, Nov 3, 2010): An informal tour of available low-cost robots and discussions with their owners, who seem to have found an inexpensive and fun way to experiment with “telepresence”, especially for surveillance of the home. Some of these robots are designed to be hacked.
  • “Drones Get Ready to Fly, Unseen, Into Everyday Life” (Wall Street Journal, Nov 2, 2010): A focused look into progress on the consumer and military fronts to create small, autonomous flying drones. A quote from the article: a “goal is to develop a drone the size of a pizza box with small propellers that can watch a soldier’s back on the battlefield.”

(UK Daily Mail treatment of the same topic/article, here).

Some observations

There’s an explosion of innovation happening at the intersection of amateur aircraft building and software/hardware hacking, spurred on, I’d guess, by advances in powerful tiny electric motors (made possible by really powerful magnets), battery technology, and the availability of cheap and easy-to-integrate subsystems for controlling motors, sensors (GPS, accelerometers, gyroscopes, cameras, ultrasound), and connectivity. Throw in the availability of carbon fiber for strong, lightweight chassis.

Hobbyists seem to be genuinely pushing the envelope with cheap and capable designs integrated via powerful processing units. I challenge you to spend 5 minutes on the DIY Drones site and not be impressed with the energy and ingenuity of the community there. (Then again, it’s easy to impress when your projects are “self-documenting”… that is, drones with on-board video cameras.)

And on the ready-to-fly/consumer front, I’m impressed with the level of technology offered. For instance, this is the list of coolness that AR.Drone brings to the table (as written on the site):

  • “A quadricopter made in carbon fiber and high resistance PA66 plastic
  • MEMS (Micro-Electro-Mechanical Systems) and video processing to ensure a very intuitive piloting of a radio controlled object
  • Wi-Fi and video streaming for a modern interface with an iPhone™ or iPod touch®
  • Images processing software for augmented reality”

Of the four consumer robots that I found (list below) via the articles noted above, or via other casual searches, all four are designed to be hacked. I was able to determine that three of the four are based on Linux, and my guess is that the fourth one is as well.

It’s cool that manufacturers have jumped onto the hacker bandwagon so strongly. This would be a great way to teach kids about programming for the real world, especially if they’ve outgrown MindStorms.

I’m wondering whether Google’s Android operating system (a variant of Linux) will see any adoption in this space? Something’s happening on the Android phone hardware front, for sure: a number of folks are building robots using Android-based cell phones as the controllers, system integrators, or remotes (here’s an example). Given that your average smart phone has significant CPU, memory, and sensor resources, all that’s needed to make a robot is a chassis, motors and controllers, etc. For instance, the Google Nexus One has a 1 GHz processor and 512MB of memory… the robots listed below typically sport processors running at half that clock rate.

Beyond surveillance and getting kids interested in hacking, how are these cheap robots being used? For helping people living at home who might need extra attention, for one thing, according to Hoaloha Robotics.

(Another tip of the hat to Charles, for forwarding a number of these links.)

An initial set of related links, more to come:

  • New York Times’ list of Robot articles: “News about robots, including commentary and archival articles published in The New York Times”


  • Rovio (from WowWee, ~$179US): “the groundbreaking new Wi-Fi enabled mobile webcam that lets you view and interact with its environment through streaming video and audio.” Includes API documentation, apparently for client web apps that access its built-in web server. Couldn’t easily figure out what software/hardware it’s based on. Loved their tagline: “Rovio - now you can finally be in two places at once!”
  • Spykee (from Meccano: http://spykeeworld.com, $329US): “WiFi spy robot”. Firmware is provided in source form. Appears to be Linux-based, with an ARM processor.
  • AR.Drone (http://ardrone.parrot.com/parrot-ar-drone/usa, $299): “The flying video game”. “First quadricopter that can be controlled by an iPhone/iPod Touch/iPad.” Includes an on-board video camera. Updatable firmware. SDK available for game developers. ARM processor running a Linux OS.
  • Spy Gear Spy Video TRAKR (http://www.spygear.net, $129). Can download apps from a catalog and also build apps on your own with a web-based “IDE”. ARM processor running a Linux OS (on both the robot and the remote!).


  • robodance.com (http://robodance.com): “Robodance is the ultimate software program for your WowWee Rovio”; other robots also supported.

Communities / Blogs

Bonus links: Robots and How we view them (these will be funny right up until Skynet becomes self-aware):

  • “Television’s Greatest Robots: A Video Timeline” (Gawker)
  • Baca Robo (“Stupid Robot”) Contest, Budapest, Hungary (via io9.com): Not in English, but fun to watch even with the sound turned down. The lab coats are a nice touch.

100 Devices

Imagine that at some point in the not-too-distant future, you’re the owner of a ‘smart’ house, which, you’re told, contains 100 smart devices. That’s a lot, you think to yourself. What are they all doing?? Here’s a plausible inventory:

  • Security devices: one device per window, to detect open/close/breakage, and a number of motion sensors, for a total of 30 devices
  • Surveillance cameras: one for each entrance, one for each side of the house, for a total of 6 devices
  • Thermostats: one for each room, for a total of 10
  • Smart light switches: one for each room and hallway, and several more for outside lights, the garage, etc, for a total of 25
  • A/V and Control wall panels for most rooms, for a total of 5
  • Devices representing main controllers for automation, security, A/V components, VOIP, personal computers, file servers, routers, access points, and broadband modems, for a total of 10

The above adds up to 86 (see Note 1), and while I’m writing this post, I’m sure I’ll think up ideas for 14 more. (see Note 2)

And I didn’t include devices that get around, such as:

  • Mobile phones, which aren’t tethered to the house but may spend a good portion of their time in the house.
  • Portable music players (iPods, etc) and other portable personal devices
  • Your cars or trucks (I’m not sure that it counts as “news” that the new Chevy Volt has an IP address, but it’s interesting that it’s part of the headline), and significant components in/on your vehicle, such as the tire pressure monitoring system (but be careful!), the A/V system, etc.
  • Stuff you might wear, such as a smart watch or telemetry-sending exercise shoes

If you ponder this future for a moment, you might arrive at these conclusions and observations:

The number of devices that we’ll rely on, for a wide range of ‘personal scenarios’, will exceed our ability to directly manage them. We’ll know they’re there, working on our behalf, but we’ll likely forget the details of how to manage or configure them, reminded of the need only when one stops working, needs some maintenance (see Note 3), or needs to do something out of the ordinary.

And even if you did try to individually manage a device, it’s likely that you’d do it remotely, via a web page or specialized application: the device itself will be too small to support direct manipulation (lacking, say, a display and buttons), or the range of options and configurations will be too complex to adequately manage via the simple display and buttons that are on the device (example: thermostats), or the device is unreachable because it’s physically embedded in the house (such as wireless security sensors for windows).

For these reasons, I think we’ll see an architecture where the devices are proxied by a device portal or manager which aggregates basic information such as state and health, enabling a user experience that supports views based on filters over state, alarms triggered on state or health, etc.

The device manager will know, for some classes of devices - such as those which are TCP/IP-enabled - how to interrogate them directly or how to subscribe to updates. Other kinds of devices - those which aren’t TCP/IP-enabled - may require an intermediate hub or subsystem to broker the communication between the device manager and the devices themselves, as with a security system (where the window sensors may be simple switches) or a 1-Wire bus, which requires a hub to communicate with.
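To make the distinction concrete, here’s a minimal sketch (all class and field names are my own, hypothetical) of how a device manager might normalize access: TCP/IP devices are interrogated directly, while non-IP devices are reached through a hub that brokers the conversation, yet both expose the same interface to the manager above them.

```python
class Device:
    """Common base: every managed device has a name, location, and kind."""
    def __init__(self, name, location, kind):
        self.name, self.location, self.kind = name, location, kind

    def read_state(self):
        raise NotImplementedError

class IPDevice(Device):
    """A device we can interrogate directly, e.g. via its REST API."""
    def __init__(self, name, location, kind, url):
        super().__init__(name, location, kind)
        self.url = url

    def read_state(self):
        # In a real system this would be an HTTP GET against self.url.
        return {"source": "direct", "url": self.url}

class OneWireHub:
    """Stands in for a 1-Wire bus master or a security panel: the only
    party that can actually talk to the simple devices behind it."""
    def query(self, channel):
        return {"source": "hub", "channel": channel}

class HubDevice(Device):
    """A device reachable only through an intermediate hub."""
    def __init__(self, name, location, kind, hub, channel):
        super().__init__(name, location, kind)
        self.hub, self.channel = hub, channel

    def read_state(self):
        return self.hub.query(self.channel)
```

The point of the shared `read_state()` interface is that the device manager never needs to care how a reading was obtained - directly, or via a broker.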

It may also embody some basic configuration/management, such as “reset”, and then link off to a device-specific management page, served up by the device itself or by a type-specific device hub.

The device manager will represent the devices to the outside world, at least for read operations, isolating them from frequent requests for updates (early Proliphix documentation recommended restricting the API request rate to “a few requests per minute”). You’d want the manager to offer an RSS feed, not the devices themselves. The device manager could also implement some level of security/access permissions.
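That isolation role can be sketched as a small caching proxy: the manager fetches from the slow device at most once per interval, and any number of readers share the cached value. (A sketch only - the `fetch` callable and the 30-second TTL are assumptions, not anything from the Proliphix docs.)

```python
import time

class CachingProxy:
    """Shield a rate-limited device (e.g. a thermostat whose API should
    only see a few requests per minute) from frequent client reads."""
    def __init__(self, fetch, ttl_seconds=30):
        self.fetch = fetch          # callable that talks to the real device
        self.ttl = ttl_seconds
        self._cached = None
        self._stamp = 0.0

    def read(self):
        now = time.monotonic()
        if self._cached is None or now - self._stamp > self.ttl:
            self._cached = self.fetch()   # one real request...
            self._stamp = now
        return self._cached               # ...shared by many readers
```

An RSS feed or API served from this cache can then be hit as often as clients like, without the device ever noticing.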

It’s more than a simple proxy, however… I’d expect it to also know the assigned name, location, and type/intended purpose of each device, and provide views/filters around that: “Show me the state of all of the security devices on the second floor”. This implies a device registry or directory. My guess is that you wouldn’t expect or want most devices to handle this metadata on their own.
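A registry like that can be tiny; here’s a sketch (attribute names invented for illustration) of the kind of filter query the manager would run:

```python
# The registry: per-device metadata the devices themselves never store.
devices = [
    {"name": "window-1", "kind": "security",   "floor": 2, "state": "closed"},
    {"name": "window-2", "kind": "security",   "floor": 1, "state": "open"},
    {"name": "therm-den", "kind": "thermostat", "floor": 2, "state": "68F"},
]

def view(registry, **criteria):
    """Return the devices matching every given attribute."""
    return [d for d in registry
            if all(d.get(k) == v for k, v in criteria.items())]

# "Show me the state of all of the security devices on the second floor"
second_floor_security = view(devices, kind="security", floor=2)
```

In practice the registry would live in a small database, but the query shape - filter on metadata the manager owns, not the devices - stays the same.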

Finally, you’d also expect that the device manager would implement some level of APIs and scripting, for sophisticated eventing and notifications. Thinking this through, it’s clear that the device manager would need to implement ‘pseudo-variables’ representing its view of the current state of the devices it’s managing. And, furthermore, what’s just been described here could be approximated by any of several Home Automation systems… depending on the approach, it wouldn’t necessarily be as elegant as you’d like, but it could be done.
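The pseudo-variable idea might look like this in miniature (names hypothetical): the manager keeps its own last-known copy of each device’s state, and rules fire against that copy whenever an update arrives.

```python
class DeviceManager:
    """The manager's 'pseudo-variables' are its own last-known view of
    device state; eventing rules run against that view, not the devices."""
    def __init__(self):
        self.state = {}       # pseudo-variables: device name -> last value
        self.rules = []       # (predicate, action) pairs

    def when(self, predicate, action):
        self.rules.append((predicate, action))

    def update(self, device, value):
        self.state[device] = value
        for predicate, action in self.rules:
            if predicate(self.state):
                action(self.state)

mgr = DeviceManager()
alerts = []
mgr.when(lambda s: s.get("window-1") == "open",
         lambda s: alerts.append("window-1 opened"))
mgr.update("window-1", "open")
```

Because the rules see only the manager’s copy of the state, they work the same whether the underlying device is IP-addressable or sits behind a hub.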


In this discussion so far, I’ve not made too much of a distinction between the kinds of access that you’ll need to your devices. Sometimes you want to “configure” a device, and sometimes you want to “use” it. For a thermostat, “configuration” means things like: setting up the kind of heater (single-stage / double-stage, heater only or heater/AC, etc), the setback schedule, and so on. “Using” the device is typically a simple affair: override the current setpoint, for a specified period of time.

You’d generally expect (though I’m not sure I can prove it) the number of “configuration” options to be at least equal to, or greater than, the number of “use” options, especially as the level of capability of the device increases: more functionality implies more state, and possible actions based on that state, which implies more configuration.

You could imagine entirely separate paths for “configuration” and “use”; in fact, in many cases, it may be very desirable to ensure this. Adhering to the “simpler is better” maxim, you’d want to keep the UI for “use” as simple as possible, and keep “configuration” UI, which might require different paradigms for efficient management, separate.


With the Proliphix Thermostats I use, you “configure” via embedded web pages. The configuration experience, delivered via browser, is fairly sophisticated (for a thermostat!). On the other hand, the on-device display and associated 5 buttons are focused almost completely on what the end-user needs: what’s the current temperature, what’s the current setpoint, and how do I get some heat? Some of the UI is dedicated to read-only access to basic configuration info (such as the current IP address), but that info is there to help the user provide good hints to the maintenance team.

In any case, it would be very hard to imagine an on-device user experience for, say, specifying setback schedules for specific days of the month or holidays using the on-device display and buttons. As the designer for the device, you wouldn’t even try. Being able to “express” yourself - as the designer of the “configuration” experience - via web pages means you can expose a lot of useful features in a natural way.

The inverse is probably true, too: don’t build it in the hardware if it can’t be configured by the admin. The embedded web server, while adding $20-$40 to the build cost of the device, means you can offer a lot more functionality and charge for it.

(An aside: what about the “blinking 12:00 AM” VCR clock problem of years past? Was this solely due to poor UI design? Or should we just blame the user? Later-model VCRs learned to pick off a time signal from the TV signal, which apparently eliminated the problem for most folks. But if you had this UI challenge in front of you today - not just getting the user to set the correct local time, but also, say, the task of programming the VCR to record a program - would you push it off to a web-based experience, even if it increased the hardware costs of the device by, say, $50?)

So, even though the device in this case is relatively small, the configuration experience is ‘outsized’. Even as devices decrease in size while increasing in capability, they could offer an ‘outsized’ user experience.

If the device (or its hub) supports APIs, you could imagine more than one flavor of UI, in support of various scenarios: beyond the built-in “configuration” and “use” UIs, multiple other “use” UIs could be supported, via APIs, in the form of other web experiences or apps. A device manufacturer might leave the heavy-lifting - the implementation of an elegant end-user UI, for instance - to third-party developers who specialize in that sort of thing.

A colleague of mine imagines a “Facebook for Devices”, a kind of third-party portal where you can see recent updates from your devices and others of interest - and casually pivot on the data in various ways, potentially offering you an easier way to keep track of your 100 devices.

These third-party or extended experiences don’t have to be limited to web sites; imagine a display-oriented device with the sole job of displaying RSS or Twitter-like feeds from multitudes of other devices.


If all of the devices you care about are under one roof, then you could imagine using a dedicated device to act as the device manager for all of your devices. It could be implemented via a low-end PC or embedded PC, running headless, with a connection for the local area network, and perhaps dedicated I/O connections for specialized types of devices (such as 1-Wire). You’d access its web page via your LAN. Its device directory, and device data archive, could be based on permanent storage on the device itself or elsewhere on your LAN.

An alternative approach might be to base the device manager in the cloud. You’d likely still need a local agent to connect your home devices, likely stuck behind a firewall/NAT, through to the cloud, and, of course, to ensure that non-TCP/IP-based devices are proxied adequately to the LAN and then on to the cloud.
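The key constraint on that local agent is that, behind NAT, connections have to originate from the home side. A minimal sketch (names and payload shape are assumptions) of an agent that queues updates and pushes them upstream, tolerating a dead broadband link:

```python
import json

class CloudAgent:
    """Local agent: makes outbound pushes to the cloud manager, since the
    cloud can't reach in through the home firewall/NAT."""
    def __init__(self, post):
        self.post = post      # callable(payload) -> bool; stands in for
                              # an HTTPS POST to the cloud manager
        self.backlog = []     # updates queued while the link is down

    def report(self, device, value):
        self.backlog.append({"device": device, "value": value})
        self.flush()

    def flush(self):
        # Drain the queue in order; stop at the first failed push so
        # nothing is lost while the broadband connection is out.
        while self.backlog:
            payload = json.dumps(self.backlog[0])
            if not self.post(payload):
                break
            self.backlog.pop(0)
```

The queue-and-drain shape is also where the hybrid model discussed below would hook in: critical events could be evaluated locally before (or instead of) being shipped upstream.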

The resource requirements for a cloud-based manager are likely to be small (just as those of a home-based manager would be), perhaps modest enough that you could use the recently-announced AWS “Free Usage Tier“, which offers a non-trivial level of resources at no cost.

Going to the cloud might mean improved reliability… if your home-based device manager falls over, it’s up to you to detect it, diagnose it, fix it, and get everything running again, while, say, in the AWS cloud, if a disk dies, you’re never supposed to notice it. They have professionals on the case.

Accessing your cloud-based device manager - the UX or the APIs - may be more performant than a home-based manager, especially if you’re outside your LAN, since it’s likely that the cloud-based manager has better peering and connectivity than the slow-ish uplink of your consumer-grade broadband connection. And if your device manager might experience multiple simultaneous requests on a regular basis, putting it in the cloud may yield better results.

A cloud-based approach also feels more elegant when the scope of devices goes beyond “the home”. If you include your mobile devices, or devices from your business or other organizations (such as local weather stations or similar services representing virtual devices that you use for more sophisticated eventing, etc), having a cloud-based device agent will make all of this easier, for the reasons already listed - reliability, bandwidth (lower latency, higher throughput, more simultaneous connections) - as well as the availability of an effectively infinite amount of CPU and disk resources, and services or capabilities that may be too complicated to contemplate for home hosting. (My same colleague suggests that in-home cameras could serve up images or video that is shipped to the cloud for sophisticated vision processing that couldn’t be done either by the camera or by any software/hardware that’s likely to be installed in the home.)

Of course, a possible downside is that if your home’s broadband connection fails, the Cloud-based portal may miss device updates and therefore may not fire important events. Perhaps a hybrid model is in order, where critical events are handled locally, with the cloud portal handling most other events.

This suggests a future architecture where the cloud is the central management point, with local support - in the form of hardware/software - in the home to handle the vagaries of getting all of your devices connected to the cloud. Again, there are legitimate concerns regarding loss of connectivity, and perhaps security.

Moving some or all of the management of a device mesh to the Cloud is an interesting enough scenario that I’ll be trying it out as soon as I can. More on this later.


OK, so we’re now used to the idea that our devices are talking to the cloud, updating their Facebook or Twitter status. In the discussion above, it was the devices that connected up to the cloud, in pursuit of new scenarios that made the device more valuable. But what if the intention involves going the other way? What if cloud services extended down to the devices, in service of new scenarios?

Mike Kuniavsky in his book, “Smart Things: Ubiquitous Computing User Experience Design” (Amazon) describes “Service Avatars”, a term he coined that conveys the added value a focused-purpose device can bring to a service. He holds up the Apple iPod as a prime example of a Service Avatar… while it wasn’t the first music player, it was the first to successfully deliver a service-based scenario - a cloud-based music store - down to a device. The device was cool, but it became so much more important when connected to its music service. (We can safely elide certain details, such as the fact that for full functionality, the device requires an intermediary “hub”, in the form of a Mac or PC running the iTunes application.)


#1: I just checked my router. It’s managing DHCP addresses for 15 devices (PCs, fileservers, phones, a VOIP adapter, access point, A/V components, weather station, and a printer). I’ve assigned fixed addresses for another 11 devices (thermostats, Ha7NET, camera server, security system, etc). That adds up to 26 IP addresses, and 26 corresponding devices. If you throw in the other devices of varying smartness…

  • 1-Wire temperature sensors (5)
  • Security sensors (10 or so)
  • Z-Wave light switches (10)

… as of now, I’m up to at least 51 devices. So 100 devices isn’t too far a stretch for a “smart home of the future”.

#2: Wait! Here are 20 more “smart home of the future” devices:

  • Connected exercise equipment (1 per house)
  • Roving security or health robots (1 per house)
  • Connected appliances, which report energy usage, low supplies, or general health statistics (oven, microwave, refrigerator, dishwasher, washer, dryer, furnace, hot water heater) (10 per house)
  • Resource-monitoring equipment, such as: water metering, electricity metering, natural gas or oil metering (oil tanks that report when they’re nearly empty, or leaking!) (3 per house)
  • Smart sprinklers that only water when absolutely necessary, and never after a rainfall, and only when given the OK by the local water board, augmented by soil moisture sensors (5 per house)

#3: I’m writing a portion of this post on the eve of that date in the Fall in the US when we “fall back” with our clocks. How many devices in the future home will need resetting every Spring and Fall? You’d hope the answer is “zero”… I take a small perverse pleasure in noting, with each passing Fall and Spring, whether the number of ‘things’ around the house that need to be manually pushed back or forward an hour is decreasing.