i see bots

In the future, you won't think this is so weird

Category Archives: Musings

Net Neutrality and the Connected Home? (updated)

Buried in last week’s net neutrality proposal by the FCC was an interesting tidbit about the connected home.

As you may know, the controversial proposal put forth a requirement that all broadband providers allow their customers to access all (legal) online content. You are probably thinking, “I thought they were already required to do that??”

Actually, no. In April of this year, the FCC lost an important court ruling when the United States Court of Appeals for the District of Columbia ruled that the FCC lacked the authority to prevent providers from discriminating against content services carried over their pipes. The case arose from a 2008 FCC finding that Comcast had secretly blocked or slowed (“shaped”) traffic associated with the BitTorrent service.

In May, the FCC chairman, Julius Genachowski, indicated in response that he was considering regulating broadband providers using the same regulatory powers the agency uses to regulate telephone networks. To date, broadband providers have been regulated as “information services”, not as “telecommunications services”; the latter classification brings with it much more scrutiny and regulation. This was a controversial proposal; service providers claimed that such a level of regulation would stifle the huge investments required to maintain and expand the country’s broadband networks.

However, in last week’s proposal, Genachowski did not take that significant step. He did set forth a basic “no blocking” requirement, while allowing providers to “shape” traffic as necessary to ensure network health, as long as they disclose their practices. Wireless providers would be given more leeway, given the more limited nature of their networks. There was also a discussion of usage-based pricing, although this is less relevant now that many major wired and wireless providers have implemented tiered or capped service levels.

(Personally, I found all of this ironic, given that in my neighborhood in Seattle, the best the local telephone monopoly can offer me is dial-up networking. Comcast is the only wired broadband service available to me, so if I don’t like Comcast’s price tiers or “shaping” practices, I really don’t have a choice. I suppose I could turn to 4G wireless services, but from what I can tell, those are still in their infancy, combining three or more of these unfortunate characteristics: expensive, relatively slow, limited to fairly low monthly data transfer ceilings, limited to certain geographic areas, and unreliable.)

I found these articles to be excellent summaries of the proposal: PC Magazine and New York Times.

I noticed the following statement in several of the articles on this topic:

“In addition, the proposal would let broadband providers experiment with routing traffic from specialized services such as smart energy grids and home security systems over dedicated networks, as long as the practice doesn’t slow down the public Internet.” (Yahoo!)

“The F.C.C. also will allow companies to experiment with the offering of so-called specialized services, providing separate highways outside the public Internet for specific uses like medical services or home security.” (New York Times)

What does this mean, I wonder? When I think about cloud-based home automation (see “A PIVOT TOWARDS THE CLOUD?” section in this post), or IP-based remote security monitoring, what I worry about isn’t latency but rather reliability. I wonder, then… will Comcast be offering me, say, some kind of uptime guarantee for a premium in their broadband offerings of the future?

What kind of broadband connectivity availability guarantee would be acceptable to me? Let’s say that they offer “three 9s” of availability, or 99.9%, which translates to just under 9 hours of downtime in a given year. (See a handy chart here)

“Three 9s” is the SLA (Service Level Agreement) that Amazon offers for its AWS offerings. If Amazon fails to meet that SLA, the customer is compensated in the form of a credit that appears on a future bill.

Would 9 hours of downtime a year be acceptable for a home automation system? I’d say that if those 9*3600=32,400 seconds of downtime were spread out uniformly across the year (say, 89 seconds each night between the hours of 2am and 3am!), I wouldn’t mind. But if all of those seconds of downtime hit all at once, it would be unacceptable.
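
As a sanity check on these numbers, here’s the arithmetic as a tiny Python sketch (using 8,760 hours per year; the combined-SLA calculation later in this post uses 8,766 to average in leap years):

    # Convert an availability percentage into allowable downtime per year.
    HOURS_PER_YEAR = 365 * 24  # 8,760

    def downtime_per_year(availability_pct):
        """Hours and seconds of downtime implied by an availability percentage."""
        fraction_down = 1.0 - availability_pct / 100.0
        hours = fraction_down * HOURS_PER_YEAR
        return hours, hours * 3600

    for nines in (99.9, 99.99, 99.999):
        hours, seconds = downtime_per_year(nines)
        print(f"{nines}%: {hours:.2f} hours/year ({seconds:,.0f} seconds)")
    # 99.9% -> ~8.8 hours; 99.99% -> ~53 minutes; 99.999% -> ~5 minutes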

To be fair, my current home PC-based home automation system has been down for hours at a time, usually planned but not always. A goal in moving to the cloud (all other things being equal) would be to leverage the actual professionals employed there to keep redundant systems healthy.

What about 99.99% availability? That’s just under an hour (53 minutes) a year. Much better, and probably even acceptable, but from what I can tell, only a few hosters offer that level of service, let alone broadband providers.

As far as I know, the only service providers who venture into the rarefied air of “five 9s” - 99.999% - are the POTS guys - the telephone companies. The infrastructure investment is impressive. I don’t think that anyone else, outside the corporate data center, bothers. Cloud-based architectures are designed to deal with 99.9%.

The possibility of significant disruptions in connectivity is my biggest objection as I think through the characteristics - positive and negative - of a cloud-based home automation or security monitoring service. If the broadband connection can go down for hours at a time, the functions you’d offload to the cloud would end up being trivial or secondary at best.

Given that it’s unlikely you could find a 99.99% availability SLA from a broadband provider (and that even if you did, it would be very expensive), you might conclude that further thinking about a cloud-based approach to home automation is a waste of time.

But there is another approach: consider implementing a “diverse” secondary broadband connection. For instance, if your main provider is Comcast, then you’d have a secondary emergency connection via, say, a 3G or 4G wireless provider (supplied by a provider other than Comcast). You would, in effect, be banking on the low probability that both providers would fail at the same time, and paying for that level of availability in the form of a second (wireless) connection that would rarely see service (and which, therefore, you hope would be inexpensive). Your connection to the cloud would be via a fast main pipe, backed up by a (perhaps cheaper) and slimmer backup pipe.

It’s probably too simplistic to compute the combined “SLA” of this primary/secondary connection architecture, but let’s do it anyway. Let’s assume that your primary provider is Comcast… I have anecdotal evidence that they don’t deliver at the 99.9% level… let’s call it 99.7% (26 hours of downtime a year). Let’s assume that the wireless connection is slightly more reliable, given its roots as a telephone service - say, 99.99% (about an hour a year). If the two failure modes are independent, the probability that both services would fail at the same time is something like: (26 hours/8766 hours in a year) * (1 hour/8766 hours in a year) = 0.3% * 0.01% = 0.00003%. Are those acceptable odds?
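
Making that arithmetic explicit - and flagging the big assumption, namely that the two providers fail independently - here’s the same calculation as a Python sketch:

    # Probability that both connections are down at the same moment,
    # assuming the two failure modes are independent (the shaky part).
    HOURS_PER_YEAR = 8766

    p_primary_down = 26 / HOURS_PER_YEAR  # Comcast at ~99.7% -> ~0.3%
    p_backup_down = 1 / HOURS_PER_YEAR    # wireless at ~99.99% -> ~0.01%
    p_both_down = p_primary_down * p_backup_down

    print(f"P(both down at once): {p_both_down:.5%}")  # ~0.00003%
    print(f"Expected overlap: {p_both_down * HOURS_PER_YEAR * 3600:.0f} s/year")  # ~11 seconds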

Let’s assume that those are indeed reasonable odds. So if you’re now comfortable that your broadband connection is reliable, what about the service you’re connecting to? If it’s hosted at Amazon AWS, then we’re back to the 99.9% level. But perhaps there are things we can do about that. More on this later.

(There are other critical objections to the concept of a home automation system in the cloud… I’ve found this thread on AVSForum to be interesting for that reason. The signal-to-noise ratio varies greatly over the 10 or so pages, but a number of folks there make very interesting comments… your mileage may vary.)

UPDATE #1, 6 December 2010: In the original version of this post, I said that “what I worry about isn’t latency but rather reliability”. Well, I found myself thinking about that statement a bit more, and decided to run some simple tests to figure out if that statement was warranted.

So I ran some very simple tests: I fired up an Amazon EC2 instance, telnetted to it, and ran “ping” from there to my home gateway (a Linksys appliance), courtesy of Comcast’s home broadband offering (3 Mb/s upload, I believe). I used a dynamic DNS service to resolve the address of my gateway. These are the results, via ping:

Amazon EC2 instance running in “us-east-1d” availability zone: ~92 ms

Amazon EC2 instance running in “us-east-1a” availability zone: ~99 ms

This compares with something like 2 to 2.9 ms ping times within my LAN (a PC on my LAN pinging my gateway).

So… the cloud is ~30 times ‘farther away’ than my home’s LAN. Is this an issue?

Here’s a thought experiment: Let’s imagine that I’ve got Home Automation functions running on an EC2 instance. Let’s further imagine that a human in my house flips a switch which has been automated in some way. How long would it take a simple Home Automation gateway on my LAN to notice that state change, communicate it to the EC2 instance, and act on the response? Cloud latency alone would consume one or two round trips - roughly 100 to 200 ms. My very informal tests with Z-Wave (using HomeSeer’s “Z-Seer” utility) indicate that communicating with a Z-Wave switch takes a small number of milliseconds, and if I assume that the code running in the EC2 instance can also turn around a response in a small number of milliseconds, then cloud latency will dominate this scenario. Perhaps the whole thing could execute, end-to-end, in under, say, 300 ms. Would this be “interactive” enough for human-scale Home Automation? I don’t know. It might be.
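
To make that budget concrete, here’s a sketch; only the ping figure comes from the measurements above, and the other component timings are assumptions:

    # Back-of-the-envelope latency budget for "flip a switch, let the cloud decide".
    PING_RTT_MS = 100      # home gateway <-> EC2, per the pings above (~92-99 ms)
    zwave_notice_ms = 10   # gateway notices the switch state change (assumed)
    cloud_logic_ms = 5     # EC2 code decides what to do (assumed)
    zwave_command_ms = 10  # gateway actuates the target device (assumed)

    for round_trips in (1, 2):  # 2 if the exchange needs a separate confirmation leg
        total = zwave_notice_ms + round_trips * PING_RTT_MS + cloud_logic_ms + zwave_command_ms
        print(f"{round_trips} round trip(s): ~{total} ms end-to-end")
    # ~125 ms or ~225 ms - plausibly inside a ~300 ms "interactive" budget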

I think the next test would be to code up a simple scenario that actually runs in the cloud and see how well it works… if anyone out there reading this has ideas about this, I’d love to hear them…

UPDATE #2, 6 December 2010: Interestingly enough, I saw these posts today:

  • “Get Ready for $99 Security, Home Automation from Comcast/Xfinity” (link)
  • “AT&T Acquires Xanboo, Developer of Home Automation Platform” (link)


More “100”

In my post on “100 bots”, it appears that I unwittingly tapped into a “100” theme. ReadWriteWeb reports that “ThingMagic”, a company that seems to focus on all things RFID, has been building a list of 100 things you can do with RFID. Each entry in the list is an informal blog-like post with a short description of the application, with links to more information and exploratory questions for the reader.

It’s an interesting list, if you keep in mind the fact that it’s coming from a company that sells RFID technology. It is a bit uneven; not all of the “100 uses” are at the same level or carry the same weight. Some of the applications described actually exist, while others are hypothetical.

As I toured the list, I noticed a common thread in several articles describing human-tracking applications… these articles ended with an open-ended question along these lines: do you, dear reader, find this application promising enough to outweigh the privacy concerns that so often accompany RFID? Some examples:

  • “Can You See Mi Now?” describes a bicycle-safety application implemented in the Danish city of Grenå: “the city implemented battery-powered RFID readers at busy intersections designed to read RFID tags placed in the steering columns of bikes.  When a cyclist approaches and stops at an intersection, the RFID reader sends a notice to an electronic sign mounted on the traffic light pole.  This notice triggers the display of a flashing ‘cyclist’ image, indicating that a rider is near and drivers should look before making a turn.” The idea is that a motorist would notice the flashing warning and take extra care when turning. The article ends with this question: “Does addressing a real safety issue - like reducing bicycle related deaths and injuries - move you past privacy concerns you may have with RFID?”
  • The “India’s National ID Card Program” entry outlines aspects of India’s initiative to store fingerprints and iris scans for all of its citizens, with the goal of, among other things, delivering better services while reducing fraud. This initiative is apparently accompanied by RFID-equipped national identity cards. The article ends with the question: “What are your thoughts about the growing use of RFID and biometric-enabled national ID cards?  Do the proposed benefits of modernization, reduced fraud, and security outweigh the potential risks?”
  • “RFID-Enabled Smart Displays” describes a new kind of synthetic-vision-equipped public-area display that is smart enough to tailor advertisements and other information based on whatever clues it can discern about the person standing in front of it… such as gender, age range, and height. The article teases out how much more interesting that customization could become if the person were wearing an RFID tag that allowed the display to access more “personal preferences”. The article ends with this question: “Share your thoughts about the evolution of smart signs.  Where will they work?  Where won’t they work?  How are personal data security issues best addressed?”

It’s commendable that these articles point out the potential for privacy concerns when it comes to tracking people via RFID technology, even if solutions are not proposed. Elsewhere on the ThingMagic site, privacy is described as a future topic that needs to be addressed:

“New technical and policy approaches will have to solve the real privacy and security concerns identified by industry analysts, technologists, and public watchdogs. If not, restrictive legislation or public backlash could thwart widespread acceptance—and limit the powerful benefits that RFID offers businesses and consumers.”

There’s also a “Dead Tags Don’t Talk” discussion about how retail-purposed tags - such as those embedded in clothes for inventory purposes - can be disabled at the point of sale, so that their owners aren’t trackable afterwards.

I did wonder, however, why “privacy” wasn’t one of the 136 tags with which these articles were tagged.

My take on today’s RFID is that it’s great for things but not yet ready for people, outside of job/organizational security applications.

100 Devices

Imagine that at some point in the not-too-distant future, you’re the owner of a ‘smart’ house which, you’re told, contains 100 smart devices. That’s a lot, you think to yourself. What are they all doing?? Here’s a plausible inventory:

  • Security devices: one device per window, to detect open/close/breakage, and a number of motion sensors, for a total of 30 devices
  • Surveillance cameras: one for each entrance, one for each side of the house, for a total of 6 devices
  • Thermostats: one for each room, for a total of 10
  • Smart light switches: one for each room and hallway, and several more for outside lights, the garage, etc, for a total of 25
  • A/V and Control wall panels for most rooms, for a total of 5
  • Devices representing main controllers for automation, security, A/V components, VOIP, personal computers, file servers, routers, access points, and broadband modems, for a total of 10

The above adds up to 86 (see Note 1), and while I’m writing this post, I’m sure I’ll think up ideas for 14 more. (see Note 2)

And I didn’t include devices that get around, such as:

  • Mobile phones, which aren’t tethered to the house but may spend a good portion of their time in the house.
  • Portable music players (iPods, etc) and other portable personal devices
  • Your cars or trucks (I’m not sure that it counts as “news” that the new Chevy Volt has an IP address, but it’s interesting that it’s part of the headline), and significant components in/on your vehicle, such as the tire pressure monitoring system (but be careful!), the A/V system, etc.
  • Stuff you might wear, such as a smart watch or telemetry-sending exercise shoes

If you ponder this future for a moment, you might arrive at these conclusions and observations:

The number of devices that we’ll rely on, for a wide range of ‘personal scenarios’, will exceed our ability to directly manage them. We’ll know they’re there, working on our behalf, but we’ll likely forget the details of how to manage or configure them, reminded of the need only when one stops working, needs some maintenance (see Note 3), or needs to do something out of the ordinary.

And even if you did try to individually manage a device, it’s likely that you’d do it remotely, via a web page or specialized application: the device itself will be too small to support direct manipulation (lacking, say, a display and buttons), or the range of options and configurations will be too complex to adequately manage via the simple display and buttons that are on the device (example: thermostats), or the device will be unreachable because it’s physically embedded in the house (such as wireless security sensors for windows).

For these reasons, I think we’ll see an architecture where the devices are proxied by a device portal or manager which aggregates basic information such as state and health, enabling a user experience that supports views based on filters over state, alarms triggered on state or health, etc.

The device manager will know, for some classes of devices - such as those which are TCP/IP-enabled - how to interrogate them directly or how to subscribe to updates. Other kinds of devices - those which aren’t TCP/IP-enabled - may require an intermediate hub or subsystem to broker the communication between the device manager and the devices themselves, as with a security system (where the window sensors may be simple switches) or a 1-Wire bus (which can only be reached through a hub).

It may also embody some basic configuration/management, such as “reset”, and then link off to a device-specific management page, served up by the device itself or by a type-specific device hub.

The device manager will represent the devices to the outside world, at least for read operations, isolating them from frequent requests for updates (early Proliphix documentation recommended restricting the API request rate to “a few requests per minute”). You’d want the manager, not the devices themselves, to offer an RSS feed. The device manager could also implement some level of security/access permissions.
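
A minimal sketch of that isolation layer, in Python; read_from_device here is a stand-in for whatever actually queries the hardware (the Proliphix API, say), and the interval is arbitrary:

    import time

    class DevicePoller:
        """Serve cached state so outside callers never query the device directly."""

        def __init__(self, read_from_device, min_poll_seconds=30):
            self._read = read_from_device      # stand-in for the hardware query
            self._min_poll = min_poll_seconds  # honors "a few requests per minute"
            self._cached = None
            self._last_poll = 0.0

        def state(self):
            now = time.time()
            if self._cached is None or now - self._last_poll >= self._min_poll:
                self._cached = self._read()    # the only place the device is touched
                self._last_poll = now
            return self._cached

    # An RSS feed, web UI, or API can now call state() as often as it likes.
    thermostat = DevicePoller(lambda: {"temp_f": 68.5}, min_poll_seconds=30)
    print(thermostat.state())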

It’s more than a simple proxy, however… I’d expect it to also know the assigned name, location, and type/intended purpose of each device, and to provide views/filters around that: “Show me the state of all of the security devices on the second floor”. This implies a device registry or directory. My guess is that you wouldn’t expect or want most devices to handle this metadata on their own.
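
A toy version of such a registry, with all names and fields invented for illustration:

    from dataclasses import dataclass

    @dataclass
    class Device:
        name: str
        kind: str    # "security", "thermostat", "camera", ...
        floor: int
        state: str   # last state reported through the device manager

    registry = [
        Device("master-window-1", "security", 2, "closed"),
        Device("hall-motion", "security", 2, "idle"),
        Device("living-room-cam", "camera", 1, "recording"),
    ]

    def query(kind=None, floor=None):
        return [d for d in registry
                if (kind is None or d.kind == kind)
                and (floor is None or d.floor == floor)]

    # "Show me the state of all of the security devices on the second floor."
    for d in query(kind="security", floor=2):
        print(f"{d.name}: {d.state}")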

Finally, you’d also expect the device manager to implement some level of APIs and scripting, for sophisticated eventing and notifications. Thinking this through, it’s clear that the device manager would need to implement ‘pseudo-variables’ representing its view of the current state of the devices it’s managing. And what’s just been described here could be approximated by any of several Home Automation systems… depending on the approach, it wouldn’t necessarily be as elegant as you’d like, but it could be done.

USE VS CONFIGURE

In this discussion so far, I’ve not made too much of a distinction between the kinds of access that you’ll need to your devices. Sometimes you want to “configure” a device, and sometimes you want to “use” it. For a thermostat, “configuration” means things like: setting up the kind of heater (single-stage / double-stage, heater only or heater/AC, etc), the setback schedule, and so on. “Using” the device is typically a simple affair: override the current setpoint, for a specified period of time.

You’d generally expect (though I’m not sure I can prove it) the number of “configuration” options to equal or exceed the number of “use” options, especially as the capability of the device increases: more functionality implies more state, and possible actions based on that state, which implies more configuration.

You could imagine entirely separate paths for “configuration” and “use”; in fact, in many cases, it may be very desirable to ensure this. Adhering to the “simpler is better” maxim, you’d want to keep the UI for “use” as simple as possible, and keep “configuration” UI, which might require different paradigms for efficient management, separate.

REMOTE UI

With the Proliphix Thermostats I use, you “configure” via embedded web pages. The configuration experience, delivered via browser, is fairly sophisticated (for a thermostat!). On the other hand, the on-device display and associated 5 buttons are focused almost completely on what the end-user needs: what’s the current temperature, what’s the current setpoint, and how do I get some heat? Some of the UI is dedicated to read-only access to basic configuration info (such as the current IP address), but that info is there to help the user provide good hints to the maintenance team.

In any case, it would be very hard to imagine an on-device user experience for, say, specifying setback schedules for specific days of the month or holidays using the on-device display and buttons. As the designer for the device, you wouldn’t even try. Being able to “express” yourself - as the designer of the “configuration” experience - via web pages means you can expose a lot of useful features in a natural way.

The inverse is probably true, too: don’t build it into the hardware if it can’t be configured by the admin. The embedded web server, while adding $20-$40 to the build cost of the device, means you can offer a lot more functionality - and charge for it.

(An aside: what about the “blinking 12:00 AM” VCR clock problem of years past? Was this solely due to poor UI design? Or should we just blame the user? Later-model VCRs learned to pick off a time signal from the TV signal, which apparently eliminated the problem for most folks. But if you had this UI challenge in front of you today - not just getting the user to set the correct local time, but also, say, the task of programming the VCR to record a program - would you push it off to a web-based experience, even if it increased the hardware costs of the device by, say, $50?)

So, even though the device in this case is relatively small, the configuration experience is ‘outsized’. Even as devices decrease in size while increasing in capability, they could offer an ‘outsized’ user experience.

If the device (or its hub) supports APIs, you could imagine more than one flavor of UI, in support of various scenarios: beyond the built-in “configuration” and “use” UIs, multiple other “use” UIs could be supported, via APIs, in the form of other web experiences or apps. A device manufacturer might leave the heavy-lifting - the implementation of an elegant end-user UI, for instance - to third-party developers who specialize in that sort of thing.

A colleague of mine imagines a “Facebook for Devices”, a kind of third-party portal where you can see recent updates from your devices and others of interest - and casually pivot on the data in various ways, potentially offering you an easier way to keep track of your 100 devices.

These third-party or extended experiences don’t have to be limited to web sites; imagine a display-oriented device with the sole job of displaying RSS or Twitter-like feeds from multitudes of other devices.

A PIVOT TOWARDS THE CLOUD?

If all of the devices you care about are under one roof, then you could imagine using a dedicated device to act as the device manager for all of them. It could be implemented via a low-end PC or embedded PC, running headless, with a connection to the local area network and perhaps dedicated I/O connections for specialized types of devices (such as 1-Wire). You’d access its web page via your LAN. Its device directory, and device data archive, could be based on permanent storage on the device itself or elsewhere on your LAN.

An alternative approach would be to base the device manager in the cloud. You’d likely still need a local agent to connect your home devices, likely stuck behind a firewall/NAT, through to the cloud, and, of course, to ensure that non-TCP/IP-based devices are proxied adequately onto the LAN and then on to the cloud.

The resource requirements for a cloud-based manager are likely to be small (just as those of a home-based manager would be), perhaps modest enough that you could use the recently-announced AWS “Free Usage Tier”, which offers a non-trivial level of resources at no cost.

Going to the cloud might mean improved reliability… if your home-based device manager falls over, it’s up to you to detect it, diagnose it, fix it, and get everything running again, while, say, in the AWS cloud, if a disk dies, you’re never supposed to notice it. They have professionals on the case.

Accessing your cloud-based device manager - the UX or the APIs - may be more performant than accessing a home-based manager, especially if you’re outside your LAN, since the cloud-based manager likely has better peering and connectivity than the slow-ish uplink of your consumer-grade broadband connection. And if your device manager might experience multiple simultaneous requests on a regular basis, putting it in the cloud may yield better results.

A cloud-based approach also feels more elegant when the scope of devices goes beyond “the home”. If you include your mobile devices, or devices from your business or other organizations (such as local weather stations or similar services representing virtual devices that you use for more sophisticated eventing), having a cloud-based device manager makes all of this easier, for the reasons already listed - reliability and bandwidth (lower latency, higher throughput, more simultaneous connections) - as well as the availability of an effectively infinite amount of CPU and disk resources, and services or capabilities that may be too complicated to contemplate for home hosting. (My same colleague suggests that in-home cameras could serve up images or video that is shipped to the cloud for sophisticated vision processing that couldn’t be done either by the camera or by any software/hardware that’s likely to be installed in the home.)

Of course, a possible downside is that if your home’s broadband connection fails, the Cloud-based portal may miss device updates and therefore may not fire important events. Perhaps a hybrid model is in order, where critical events are handled locally, with the cloud portal handling most other events.

This suggests a future architecture where the cloud is the central management point, with local support - in the form of hardware/software - in the home to handle the vagaries of getting all of your devices connected to the cloud. Again, there are legitimate concerns regarding loss of connectivity, and perhaps security.

Moving some or all of the management of a device mesh to the Cloud is an interesting enough scenario that I’ll be trying it out as soon as I can. More on this later.

A PIVOT TOWARDS “SERVICE AVATARS”?

OK, so we’re now used to the idea that our devices are talking to the cloud, updating their Facebook or Twitter status. In the discussion above, it was the devices that connected up to the cloud, in pursuit of new scenarios that made the device more valuable. But what if the intention involves going the other way? What if cloud services extended down to the devices, in service of new scenarios?

Mike Kuniavsky, in his book “Smart Things: Ubiquitous Computing User Experience Design” (Amazon), describes “Service Avatars”, a term he coined that conveys the added value a focused-purpose device can bring to a service. He holds up the Apple iPod as a prime example of a Service Avatar… while it wasn’t the first music player, it was the first to successfully deliver a service-based scenario - a cloud-based music store - down to a device. The device was cool, but it became so much more important when connected to its music service. (We can safely elide certain details, such as the fact that, for full functionality, the device requires an intermediary “hub” in the form of a Mac or PC running the iTunes application.)

Notes

#1: I just checked my router. It’s managing DHCP addresses for 15 devices (PCs, fileservers, phones, a VOIP adapter, access point, A/V components, weather station, and a printer). I’ve assigned fixed addresses for another 11 devices (thermostats, Ha7NET, camera server, security system, etc). That adds up to 26 IP addresses, and 26 corresponding devices. If you throw in the other devices of varying smartness…

  • 1-Wire temperature sensors (5)
  • Security sensors (10 or so)
  • Z-Wave light switches (10)

… as of now, I’m up to at least 51 devices. So 100 devices isn’t too far a stretch for a “smart home of the future”.

#2: Wait! Here are 20 more “smart home of the future” devices:

  • Connected exercise equipment (1 per house)
  • Roving security or health robots (1 per house)
  • Connected appliances, which report energy usage, low supplies, or general health statistics (oven, microwave, refrigerator, dishwasher, washer, dryer, furnace, hot water heater) (10 per house)
  • Resource-monitoring equipment, such as: water metering, electricity metering, natural gas or oil metering (oil tanks that report when they’re nearly empty, or leaking!) (3 per house)
  • Smart sprinklers that only water when absolutely necessary, never after a rainfall, and only when given the OK by the local water board, augmented by soil moisture sensors (5 per house)

#3: I’m writing a portion of this post on the eve of that date in the Fall in the US when we “Fall Back” with our clocks. How many devices in the future home will need resetting every Spring and Fall? You’d hope the answer is “zero”… I take a small perverse pleasure in noting, with each passing Fall and Spring, whether the number of ‘things’ around the house that need to be manually pushed back or forward an hour is decreasing.

Simple

“Simplicity is the ultimate sophistication.” - Leonardo da Vinci (here and here)

I couldn’t pass up the opportunity to use a quote from L. DaVinci as an introduction to this blog post. But it really does set the right tone for a topic that’s been creeping up behind me on pad-feet, creaking the floorboards and occasionally breathing a little loudly. It’s always been there, but now it’s time to write about it.

A main theme for home automation should be, I think, to add value - in terms of convenience, security, energy efficiency, etc - while simplifying. Put another way, if you bring a new scenario to bear, don’t do it in such a way as to add complexity. Just add value, not complexity. Hold the complexity! Or make something simpler than it was.

It’s hard to make something simple, or simpler, while adding cool new functionality. Another quote is due:

To paraphrase Einstein (“simply” because I can):

“Make things as simple as possible, but not simpler” - Einstein (here)

Some questions to ponder when considering adding a new scenario:

  • How “natural” does the new scenario seem to the average user?
  • How much “training”, if any, is required?
  • How much of what the user already knows can be leveraged in the new scenario?
  • How robust is the scenario in the face of unexpected user actions or input, perhaps using existing controls or devices? Or other failures (such as power failures, loss of internet connectivity, etc)?
  • How are existing scenarios changed?
  • What additional “workloads” does the new scenario introduce for the user?

For example, some considerations when installing light switches that can be controlled via Home Automation software:

  • Do the switches operate like ordinary switches? Will users just “know” how to use them, because they operate just like other switches around the house?
  • What “value” are you introducing with these fancy new switches… is it for energy conservation? Security? What plan will you put in place to avoid confusing or frustrating users - or, worst case, leaving them tripping in the dark, feeling the walls for the switch for the light that just turned itself off for some reason?
  • If the HA software is programmed to turn the lights off under certain conditions, how will users react? Can this automation be overridden?
  • If the HA software is programmed to turn the lights on under certain conditions, will it also have a plan to turn them off (to conserve power or reduce user workload)?

Consider a (newer) Z-Wave light switch. One of the things I like about Z-Wave is that my controller software (HomeSeer) can quickly detect when the user turns the switch on or off (see Note 1). This means that I can fire events based on a change in the state of the switch. So I can implement a timer for the switch: if the user turns on the light, I want it to be turned off in, say, 20 minutes, because people in my house don’t know how to turn off lights (or so it seems). So far, so good. But if someone turns the light on, then off 5 minutes later (that would be me), and THEN turns it back on a minute later, that 20-minute timer needs to be reset. It takes some scripting to make this happen (to clear out any pending “off” events for that switch).
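
My version lives in HomeSeer event scripts; here’s the gist as a stand-alone Python sketch, with threading.Timer standing in for HomeSeer’s delayed events:

    import threading

    OFF_DELAY_SECONDS = 20 * 60  # auto-off, 20 minutes after the light goes on
    _pending_off = {}            # switch name -> pending off-timer

    def zwave_off(switch):
        print(f"{switch}: auto-off")  # stand-in for the real Z-Wave "off" command

    def on_switch_change(switch, state):
        """Call whenever the controller reports a switch state change."""
        timer = _pending_off.pop(switch, None)
        if timer:
            timer.cancel()           # any state change clears the pending "off"
        if state == "on":            # every "on" gets a full, fresh timer
            t = threading.Timer(OFF_DELAY_SECONDS, zwave_off, args=(switch,))
            t.daemon = True
            t.start()
            _pending_off[switch] = t

    # On, off five minutes later, then on again: only the last timer survives.
    on_switch_change("kitchen", "on")
    on_switch_change("kitchen", "off")
    on_switch_change("kitchen", "on")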

This is an example of using some extra cycles (in the form of additional scripting) to hide the complexity from the user and build in some robustness. It’s also an example of the importance of 2-way communication between devices. It’s hard to create a smart scenario involving unpredictable humans if the controls involved can’t communicate bi-directionally. Early Z-Wave devices, and earlier technologies such as X-10, could not do a good job of keeping the HA software in the loop when the state of a device changed. If the user turned on a light, the HA software might not know it, or might not know it for several minutes. If you were interested in adding value with these switches, it had to be done carefully to avoid frustrating the user.

Another example: what happens when you “automate” the A/V stack in your living room employing the usual approach of programming a fancy universal remote control (see Note 2)? If my household is any indication, you’re introducing a new world of hurt. The usual approach leaves out the possibility that the user might have the audacity to actually touch the equipment, or perform a step in some order other than what’s been prescribed. This is usually because the typical remote control communicates in one direction only: it talks, but does not listen. It has no clue as to whether its commands have been correctly received, or whether its own model of the state of the components it thinks it’s controlling is accurate (see Note 3). If the hapless user happens to, say, turn on the DVD player by pushing the power button on the player in order to insert a DVD, and then picks up the universal remote to “Play DVD”, confusion results. The hapless universal remote, not knowing that the DVD player is already on, will likely send a “power toggle” remote control signal to the DVD player, which will promptly turn it off. The remote is none the wiser. The user, though, is sure that something is screwed up. In an ideal world, the remote and the controlled components would talk to each other. In a slightly-less-ideal but still workable world, all component manufacturers would implement discrete remote codes for “power” commands and the like - not toggle commands.

In our house, we’ve moved beyond universal remote controls. At some point, it struck me how much of a compromise they represented in terms of the user experience. Instead of using the remote that came with each component - DVD player, game console, tuner, etc - we were trying to shoe-horn all of the component-specific functions into a single, large, oddly-shaped remote that also tried to control the components using a 1-way protocol.

It slowly dawned on me that I could take a contrarian approach: if you want to watch a DVD, why not pick up the DVD remote and just use that remote for everything? That remote obviously already has a power button. And, the kicker is, it also has volume up/down buttons - presumably because it can control receivers from the same manufacturer or be pressed into service as a universal remote. So everything one needs to watch a DVD - power, volume, transport, and other DVD-specific functions - are represented as buttons on that one remote. So why not design a DVD scenario around that DVD remote? Similarly, the game console remote had all the buttons one needs to use it as well as a set of volume and mute buttons. Well, this is odd. Let’s go with it.

The approach I took was to mirror, reliably and in real time, the power state of each component as software variables in the HA software (virtual devices in HomeSeer). If the DVD player was turned on, I needed the corresponding “DVD Power” virtual device in HomeSeer to change instantly to “On”. Ditto the Xbox and any other source components. This was important to get right in order to handle the situation where a pesky human touches something in the A/V stack to, say, switch out a DVD. It took a while to figure out how to do this. For now, I’ll summarize it as follows:

  • DVD player: my particular player, a Sony BluRay player, sports a USB port on the front. When the player is powered on, that port is powered up. I built a simple circuit to sense when a +5 volt signal is present at that port and change the state of the “DVD Power” virtual device in HomeSeer to “On”. When the +5 volt signal is removed, the virtual device state is set to “Off”.
  • Xbox: this took a bit of work (and voided the warranty!). The Xbox also has USB ports, but these remain powered even when the Xbox is “off”. Foiled! So the trick I used for the DVD player wouldn’t work here. I did, however, find a way to tap into the wires leading to the fans, which are fed a varying level of voltage when the console is on (presumably depending on how hot the console is). I built a simple circuit to detect when any positive voltage is present on the fans, and update an “Xbox Power” virtual device in HomeSeer accordingly.

Once I had power-state variables I could rely on, I was on the way to implementing this “pick up just the remote for the source you want” scenario. If you want to watch a DVD, pick up that remote and hit the power button. The DVD player will turn on, and HomeSeer will take note of it: it will run some additional scripts to turn on the receiver and the monitor (which entailed making use of the RS-232-based command sets offered by those two components). Enabling the use of the DVD remote’s volume up/down buttons took more work, involving a PC-based IR receiver/transmitter (the USB-UIRT).
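
Stripped of the HomeSeer and wiring specifics, the logic amounts to the following sketch; the sensing and RS-232 functions are placeholders for my circuits and the components’ actual command sets:

    import time

    virtual = {"DVD Power": "Off", "Xbox Power": "Off"}  # mirrors of real state

    def dvd_usb_has_5v():
        """Placeholder for the sense circuit on the DVD player's USB port."""
        return False

    def rs232_send(component, command):
        """Placeholder for the receiver/monitor RS-232 command sets."""
        print(f"{component} <- {command}")

    def set_virtual(name, state):
        if virtual[name] == state:
            return                   # no change, nothing to do
        virtual[name] = state
        if name == "DVD Power" and state == "On":
            # The user hit power on the DVD remote: light up the rest of the stack.
            rs232_send("receiver", "POWER ON")
            rs232_send("receiver", "INPUT DVD")
            rs232_send("monitor", "POWER ON")

    for _ in range(4):  # simple polling demo; the real version is event-driven
        set_virtual("DVD Power", "On" if dvd_usb_has_5v() else "Off")
        time.sleep(0.5)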

I’ll post a more detailed explanation of all this in a follow-up post. But the bottom line is that it’s all working now. The existing controls - the remotes, the buttons on the components themselves - still work as expected, and in fact complement the new scenario. The universal remote control is packed away in the closet, beeping now and then as its battery fades.

A final example involves a scenario inspired by a contractor who was working in our bathroom. He kept saying, “Get it out! Get it all out!!”, when we talked about sizing the exhaust fan. He really (really) believed in the importance of clearing the bathroom quickly of steam from the shower. If not, you run the risk of mildew or rot and the resulting structural problems. Always run the fan while showering, and then for 10-15 minutes afterwards.

In my house, however, few other occupants really took this message to heart. If they remembered to turn the fan on, it was typically after the shower was over and the walls were already dripping with condensation. And no one remembered to turn it off after the required 15 minutes, thus triggering complaints from me about noise and wasted electricity.

I moved the fan to a Z-Wave controlled circuit, and scripted it so that it would turn off automatically after 15 minutes. That’s a good start, but doesn’t solve the problem of getting people to turn the fan on during the shower, not after it’s over and the place is already fogged up.

What would the average user expect when asked to describe an “automatic shower fan”? I’d say this: the fan should turn on automatically when someone starts the shower, and not turn off until 15 minutes after the shower ends. That’s a great goal statement. But, as you can imagine, from an implementation perspective it seemed like a downright gnarly problem to solve. It was solved, though, albeit with some extra hardware (a Ha7NET hub and 1-Wire devices) and more-complex-than-usual scripting. The “simplest solution” has been working well for a couple of years now. More on that later.
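
The real implementation leans on the 1-Wire sensors and HomeSeer scripting; the core idea, sketched here in Python with invented thresholds, is simply that a rapid temperature rise at a sensor near the shower means a shower is in progress:

    import time

    RISE_PER_MINUTE = 2.0  # degrees F per minute that signals a shower (assumed)
    OFF_DELAY = 15 * 60    # keep running 15 minutes after the shower ends

    def read_temp_f():
        return 70.0        # placeholder for a 1-Wire sensor read via the Ha7NET

    def set_fan(on):
        print("fan", "on" if on else "off")  # placeholder for the Z-Wave circuit

    last_temp = read_temp_f()
    fan_on = False
    calm_since = None      # when the temperature stopped rising

    while True:            # control loop; runs for the life of the system
        time.sleep(60)
        temp = read_temp_f()
        rising = (temp - last_temp) >= RISE_PER_MINUTE
        last_temp = temp

        if rising:                   # shower (still) in progress
            calm_since = None
            if not fan_on:
                fan_on = True
                set_fan(True)
        elif fan_on:                 # no longer rising: start the 15-minute clock
            calm_since = calm_since or time.time()
            if time.time() - calm_since >= OFF_DELAY:
                fan_on = False
                set_fan(False)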

In closing: I like ThingM’s motto: “Smart devices make things simple”. I think that’s a good criterion to use when deciding whether your ‘bots and automation plans are actually adding value.

Notes:

  1. If I’ve set it up correctly. Due to various issues with various versions of HomeSeer, it’s not a given that this is always the case. More on this later.
  2. I’ve owned two Logitech Harmony universal remotes, but, alas, I can’t recommend them. Both ended up being disappointments for various reasons… the programming experience is awful, but it pales next to the issues caused by the poor product build quality, hardware flakiness, and support. The most recent model I owned is the Harmony 890, which can send signals via IR or RF. If you’re considering one of these, please take a look at the reviews on Amazon first. On paper, it showed promise…
  3. Too often, component manufacturers take the lazy approach to the power button on the remote, implementing it as a toggle. Pushing the “power” button on the average remote often sends a single command which the component interprets as: “if you’re on, turn off; if you’re off, turn on”. This means that the average universal remote must remember whether a component is on or off in order to implement a scenario like this: “the user is currently watching a DVD but would now like to play a game on the game console… so, turn off the DVD player and turn on the game console”. If the remote thinks that the game console is already on, it will skip sending a “toggle power” remote code. If, on the other hand, the game console has discrete “power on” and “power off” codes, the remote can confidently send “power on”, which will do nothing if the game console is already on. I think that “discrete codes” are one of the marks of a component/device that’s designed to be system-integrated. The sketch below illustrates the difference.
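
A tiny sketch of that difference, with hypothetical codes:

    def send_ir(code):
        print("IR ->", code)  # placeholder for the actual IR blaster

    # Toggle-only component: the remote must guess the current state, and the
    # guess goes stale the moment a human touches the component directly.
    believed_on = False

    def ensure_on_with_toggle():
        global believed_on
        if not believed_on:       # if the guess is wrong, this turns the device OFF
            send_ir("POWER TOGGLE")
        believed_on = True

    # Discrete codes are idempotent, so no state tracking is needed at all.
    def ensure_on_with_discrete():
        send_ir("POWER ON")       # harmless if the component is already on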

The Internet of Things

When I started this “iseetbots” blog, I blithely assumed that it was self-evident what terms like ‘Bot’ or ‘Connected Device’ mean.

Similarly, every time I heard the term “Internet of Things”, I blithely assumed I knew what that term meant (and that my interpretation matched everyone else’s 🙂).

Boy, was I wrong. So here’s an informal summary of a quick look-see into the “Internet of Things”. My first, and probably not my last.

As a meme, “Internet of Things” (IoT) has hit the big time. There are lots of blog posts, dedicated media site coverage, top-ten lists, a few conferences, distinguished research labs hiring researchers, a council, a consortium, analyst coverage, a couple of startups, and – w a i t  f o r  i t – a Wikipedia entry.

OK, so IoT is here. What is it, then?

The upshot of my quick and non-scientific investigation is that, for many people at this point in time, IoT describes the emerging mesh of self-identifying objects that helps keep track of things for us (and, in a dystopian world, helps our governments keep track of us). In the short term, think RFID.

The CASAGRAS (“Coordination And Support Action for Global RFID-related Activities and Standardisation”) council (in the EU) discusses various definitions, including one offered by an SAP Researcher:  “A world where physical objects are seamlessly integrated into the information network, and where the physical objects can become active participants in business processes.”

Businesses, especially those with inventory or supplies, need to stay abreast of this trend. Now! The “Internetome” conference announced itself with this warning: “The Internet of Things is here now, and it’s going to get big and quickly…The earlier your organisation gets to grips with the opportunities, as soon as you can identify and plot a journey over the hurdles and around the pitfalls… the sooner you can innovate to maintain and grab competitive advantage.”

IBM seems to have made IoT an important aspect of their “Smarter Planet” initiative / strategy / other, the need for which they motivate like so:

“At IBM, we mean that intelligence is being infused into the systems and processes that make the world work—into things no one would recognize as computers: cars, appliances, roadways, power grids, clothes, even natural systems such as agriculture and waterways.”

A key capability revolves around all that data that’s being generated by all of those devices:

Data is being captured today as never before. It reveals everything from large and systemic patterns—of global markets, workflows, national infrastructures and natural systems—to the location, temperature, security and condition of every item in a global supply chain. And then there’s the growing torrent of information from billions of individuals using social media. They are customers, citizens, students and patients. They are telling us what they think, what they like and want, and what they’re witnessing. As important, all this data is far more real-time than ever before.

And here’s the key point: data by itself isn’t useful. Over the past year we have validated what we believed would be true—and that is, the most important aspect of smarter systems is data—and, more specifically, the actionable insights that the data can reveal.

Anyway, “Smarter Planet” is at a… planet-like scale that only IBM could muster – the SmarterPlanet website is huge and the range of IBM products and services huger. It seems they’ve wrapped their entire business around this concept. More on this later.

+++

The writer Bruce Sterling invented the term “spime” to describe a class of devices with these characteristics:

  • Small, inexpensive means of remotely and uniquely identifying objects over short ranges; in other words, radio-frequency identification.
  • A mechanism to precisely locate something on Earth, such as a global-positioning system.
  • A way to mine large amounts of data for things that match some given criteria, like internet search engines.
  • Tools to virtually construct nearly any kind of object; computer-aided design.
  • Ways to rapidly prototype virtual objects into real ones. Sophisticated, automated fabrication of a specification for an object, through “three-dimensional printers.”
  • “Cradle-to-cradle” life-spans for objects. Cheap, effective recycling.

(from Wikipedia)

This definition covers a lot of ground, and specifies aspects of not just the “things” in the IoT but also the “means” for those things (tools for design and rapid prototyping and fabrication – think RepRap and the like) and methods for dealing with the expected rivers of data coming from them. On that last point: the “OpenSpime” developer network (appears to be defunct) was created to “implement an open protocol for an open internet of things”, based on an extension of the XMPP messaging protocol. (I wonder what overlap, if any, there might be with xAP?).

WideTag has adopted this spime-centric view of the IoT, including a characterization into “Category 0” and “Category 1” spimes.

+++

IoT has some people worried, and may in fact cause a run on tin foil. The Internet of Things council casts the challenge of the age as “transcending the short-term opposition between social innovation and security by finding a way to combine these two necessities in a broader common perspective” and “It holds dangers, but it also holds promises” and “defensive, driven by design principles of control and fear and has in the past six years not been able to create much enthusiasm, on the contrary, it has sparked lots of defensive debates on transparency, privacy and fear mongering”. Besides wrapping your passport in tin-foil, perhaps merchants should proactively ‘blow the fuse’ on RFID tags when the sale is consummated, thus rendering the tag useless for future tracking?

ReadWriteWeb describes a possible future where countless individual pieces of information from your environment are recorded, transmitted, and fused into a larger, all-knowing panorama of one’s activities: “imagine a future where all objects are “social” data-collectors  who can report their use, their history, their location, etc. Now imagine the government or corporations accessing that data and taking action based on what the objects’ data tells them”.

As an example of what could be coming… how many optical gyroscopes can fit on the head of a pin?

+++

This RFID focus is a narrow, short-term view of IoT, based on my informal research. The longer-term view is harder to define: “Our future with the Internet of Things is still quite unclear. But initial glimpses of it can be seen through applications of RFID technology” (The Internet of Things Council).

So… a number of folks are thinking about a world where a critical mass of everyday things are self-identifying and perhaps can even sustain a conversation with you or your electronic delegate. In that future, our relationship with those things will be significantly different. Given that Twitter’s 140 character limit has set the bar here, it might not take much for an object to pass itself off as being part of a conversation of some kind, even if it’s being ‘followed’ only by other objects. We are already seeing Tweeting houses, buoys, and what not.

I think Social Node expresses it best:

“Over the next 5 years the web will rapidly spread into the world.  This will not necessarily require the abundant, cheap sensors typically referenced in conversations about The Internet of Things (which is more about direct object-to-object communication).  Instead, it’s more likely that prosumers will enrich rich virtual mirror worlds and then access them via geo-coordinates at home or on the go.  “

Which is what these companies are enabling: the association of social content - photos, videos, etc - with specific, physical objects, through tags that you attach or otherwise map to the objects:

  • StickyBits: “A fun and social way to attach digital content to real world objects”, by mapping a bar code on something - a business card, a cereal box, a car, etc - to your content - a video, document, photo, etc. Someone comes along and scans the code, and ‘retrieves’ what you’ve left there.
  • Tales of Things: Proclaiming, “It’s a memory thing”, you can connect “anything with any media, anywhere”. Appears similar to StickyBits except using QR codes that you print on your own.
  • Itizen: “a place to tell, share, & follow the life stories of interesting things”… appears similar to StickyBits, except with custom tags that you buy or print on your own.
  • pachube: “Store, share & discover realtime sensor, energy and environment data from objects, devices & buildings around the world. Pachube is a convenient, secure & scalable platform that helps you connect to & build the ‘internet of things’.” Cool mashup mapping devices from all over the world.

The “ELEARNSPACE” blog gushes about how this eventuality – social objects - will likely have a greater impact than social media (take that, Zuckerberg!):

“As more devices connect to the internet – cars, home security systems, utility monitoring – and as more objects include RFID tags, the physical world begins to merge with the digital world. I can search for my car keys the same way I search for a research paper. Social media is an overlay of socialization on top of our physical worlds. The internet of things is an integration of physical and virtual worlds, permitting the most desirable elements of each to exist in the other.”

Social Node points out that the resultant river of data will be a rich target for monetization:

“There is tremendous business, consumer, and social demand in place to incentivize these flows.  This pull force is getting stronger as we collectively discover new ways to unlock the value of this data.”

Which seems to be where WideTag, mentioned above in the spime discussion, comes in: a startup focused on an infrastructure for collecting and analyzing the river of data that’s expected to flow from the IoT: “WideSpime enables the rapid and scalable development of dependable solutions based on Social Hardware and services. With the addition of WideSpime’s rich set of functionalities, your application’s adoption rate will soar!” (!)

+++

In this IoT space, an underlying theme of environmental action and responsibility is often implied or explicitly called out. For instance, WideTag’s tagline is “Realtime. Social. Green”; while I couldn’t find an explicit explanation on their site, I gather that their take is that “green technologies are going to be an exceptionally important application of widespread, bottom-up, environmental sensor technology” of the sort an IoT implies.

That makes sense; if we can follow river levels via Twitter today, then tomorrow, via small wireless devices, could we be following Tweeting salmon (“Hey, who put that damn dam there??”) or glaciers (“Is it me or is it getting warmer around here?”) or ocean currents (“C’mon in! It’s a balmy 38 F!”).

(OK, silly, but you get the idea.)

On the other hand, it could be that the IoT is an intrinsically non-green activity. IBM’s SmarterPlanet initiative apparently projects that there will be 30 billion RFID tags extant at some point. Whether you believe that number or not, that’s a lot of ‘things’ being created and probably not recycled when we’re done with them. I wonder if RFIDs are “RoHS compliant” in the first place… are they even designed to be recycled?

And RFIDs are very simple devices that don’t include batteries and circuit boards made of exotic and hard-to-recover materials, as you’d expect with ‘smarter’ devices. So an aspect of the ‘green’ in IoT may be a proactive reflex to stay ahead of the curve on the environmental footprint of the IoT. Note that in the “spime” definition, above, one metric or requirement was: ‘“Cradle-to-cradle” life-spans for objects. Cheap, effective recycling.’

IBM highlights a random list of case studies in the “Sustainability” section of their SmarterPlanet initiative… but it feels like they needed to fill in a marketing check-box.

I tried not to be cynical when I read what the folks running the Internetome conference had to say: “what’s good for your organisation may well be good for the planet too.”

+++

It’s been interesting learning more about IoT. I’m sure there will be more to write about in future posts. My guess is that my near-term interest will be in ‘bespoke’ objects that are designed and built to function as 2-way connected devices or ‘Bots in the first place.

I will close with this thought (and just a couple of postscripts!): I think Adrianne Jeffries gets it right when she observes this:

IoT “got to be an overused misnomer even before the technology had a chance to become common”.

You think?

+++

Postscripts:

  • I have to admit that when I run across IBM “Smarter Planet” ads in magazines, etc, my eyes glaze over instantly, rendering me incapable of understanding exactly what they’re selling (which is really what it’s about). Similarly, their pithy taglines tend to leave me a little bit dumber every time I take them in:
    • “Intelligence – not Intuition – drives innovation”…  I really don’t know what that means, and if I did, I’m sure I wouldn’t agree with it. Would Edison have agreed with it? I think IBM’s point is that the average enterprise or organization needs to be “data-driven” in its decisions and planning, which requires the ability to analyze and view the data from many angles: “The most important aspect of smarter systems is data—and, more specifically, the actionable insights that the data can reveal.”
    • “The planet has grown a central nervous system”: Has it, really? Where’s the “brain”, then? I thought the internet was distributed and decentralized? Are we talking about Skynet here? What do they mean??
    • “Welcome to the Decade of Smart”. I guess “Decade of Smarter” sounded clunky. And do they know about Diesel’s new ad campaign?
  • I just realized that it’s apparently the EU that’s taking the lead in all of these IoT discussions. Did you notice all those “organisations”? Should I rashly leap to any conclusions based on this? Whatever it is, WideTag has decided to export it: “WideTag, Inc. has been founded by a team of experienced entrepreneurs who, having lived in Europe, Italy, are mashing-up the Silicon Valley’s startup culture, with Europe’s strong values, social responsibility, and design driven life.”
  • There’s a tangentially-related conference, “Fifth International Conference on Tangible, Embedded, and Embodied Interaction”, which seems to focus more on interactions with devices, etc: “TEI is the premier venue for cutting edge research on interaction with tangible artefacts and systems. We invite submissions of prototypes and daring ideas, tools and technologies, methods and models, as well as interactive art, interaction design, and user experience that contribute new understandings to the broad area of tangible computing, embodied interaction, interactive surfaces and embedded interactive systems.”
  • Even farther afield, and just because it sounds interesting, there’s also the “Smart Fabrics 2011” event: “The conference will cover topics such as the current status of innovative smart fabric technologies in the marketplace, as well as recent application breakthroughs and adoption. The conference will be of particular interest for people involved in electronics, textiles, medical, sporting equipment, fashion, and wireless communication industries, as well as military/space agencies and the investment community.”
  • On my IoT to-do list: Watch O’Reilly’s keynote on this topic. Get some of my own devices to show up on pachube.
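  • (For the curious: from what I can tell, pachube exposes a simple REST API, so “showing up” there is mostly a matter of HTTP PUTs. Below is a minimal Python sketch of what I think that looks like; the feed ID, API key, and the “temperature” datastream name are hypothetical placeholders, and you should check pachube’s v2 API docs before trusting the details.)

    # Minimal sketch: push one reading to a pachube feed via what I
    # believe is its v2 REST API (CSV flavor). FEED_ID, API_KEY, and
    # the "temperature" datastream are hypothetical placeholders.
    import urllib.request

    FEED_ID = "12345"           # hypothetical feed ID
    API_KEY = "YOUR_API_KEY"    # hypothetical pachube API key

    def push_reading(datastream, value):
        url = "http://api.pachube.com/v2/feeds/%s.csv" % FEED_ID
        body = ("%s,%s" % (datastream, value)).encode("ascii")
        req = urllib.request.Request(
            url, data=body,
            headers={"X-PachubeApiKey": API_KEY},
            method="PUT")       # PUT updates the feed's current values
        urllib.request.urlopen(req)

    push_reading("temperature", 21.5)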

What, no Moore’s Law for Batteries??

In a previous post, I wondered aloud:

“The implication of Moore’s law, along with implicit corollaries for energy storage technologies (batteries, capacitors, etc) – is there a law yet in Wikipedia for this??”

Turns out, there’s been some buzz about this question recently. As pointed out by Gigaom, Thomas Friedman, in an otherwise excellent piece in the New York Times (September 25, 2010) on the need for the United States to ‘drive’ an electric car program as aggressively as it did its own Moon Shot program in the 1960s, repeated an assertion that there is indeed a kind of Moore’s Law already in effect for batteries: “the cost per mile of the electric car battery will be cut in half every 18 months.” Gigaom correctly pointed out that there is no such “law” currently in effect.

Techies have held Moore’s Law and its consistent returns in awe for so long that it’s natural for them (us) to assume that every other hard challenge of science and technology will eventually be tamed and mastered in a similar manner. So far, though, battery technology appears to be immune to this romantic notion. According to Bill Gates, “There are deep physical limits” when it comes to batteries (also reported by Gigaom).

Perhaps Moore’s Law is really more of an observation, or an “assertion”, than a law. As originally stated, it referred to circuit density (the “number of transistors”), but some have shown that circuit performance has hewed to the same line, also doubling every 18 months. The difference between density and performance is significant, even though, as things get more complicated, ‘performance’ becomes hard to measure universally. As circuits have shrunk (as densities have increased), clock rates have also increased: more transistors to do more work per clock cycle, leveraged by more available cycles per second.

Why does Moore’s Law apply only here (and not to, say, battery capacity)? Is it because increasing transistor density “just” comes down to figuring out how to print circuit patterns onto silicon using ever-shorter wavelengths of light, while “simply” managing to deal with the various weird physical effects that come into play when you’re working at nanometer scale, while designing the circuits for testability and robustness, etc? Can any of those physical techniques be applied to battery technology, to increase surface area, etc? I’d better stop here since I’m just guessing about this 🙂.

Note that in the Friedman article, the metric was not battery capacity, volume, weight, or energy density, but “cost per mile of the electric car battery.” That’s surely an important metric to focus on now, and not a bad place to start; my guess is that getting the un-subsidized cost of a 100-mile-range electric car to drop below, say, $20k would be an interesting milestone. The Nissan Leaf, an all-battery vehicle with a 100-mile range, carries an MSRP of nearly $33k; after subsidies, Nissan estimates that the take-home price starts at $25k. At some point, however, longer range will also become a distinguishing factor, and so battery energy density will become an important metric to track.
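
Just to make the arithmetic of the claimed “law” concrete, here’s a little Python sketch of the curve it implies, cost(t) = cost0 × 0.5^(t / 1.5), with a purely made-up starting cost:

    # The claimed "law": battery cost-per-mile halves every 18 months,
    # i.e. cost(t) = cost0 * 0.5 ** (t / 1.5), with t in years.
    def projected_cost(cost0, years, halving_period=1.5):
        return cost0 * 0.5 ** (years / halving_period)

    # Purely illustrative starting point: $100 per mile at year zero.
    for t in (0, 1.5, 3, 6, 9):
        print("year %4.1f: $%6.2f per mile" % (t, projected_cost(100.0, t)))
    # Year 9 comes out around $1.56: a 64x drop in nine years, which is
    # exactly the sort of curve batteries have never shown.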

But at a higher level, the real metric is the cost, density, or capacity of “energy storage”… there are other ways besides batteries to store energy, such as capacitors, flywheels, fuel cells, compressed air (yup), and spit (kidding!).

What does this have to do with ‘Bots? A lot. As energy storage technology improves (admittedly, mostly in the form of batteries), we get smaller batteries that pack more of a punch, in terms of total energy stored and/or the rate at which that energy can be delivered (that is, “power”), and/or that can hold their charge longer. Each improvement makes the set of scenarios you can envision for a self-contained connected device that much richer. Couple that trend with CPUs that can do more with less power, and with sophisticated ‘sleep’ modes, and you get even more leverage. Batteries that last 10 years are now commonplace… imagine a self-contained wireless device packing a battery that can power it for, say, 20 or 30 years… you’d start to think differently about the scenarios it would enable. You could, for instance, build such devices into semi-permanent structures: boot ‘em and forget ‘em… for security, maintenance, building health, and other kinds of monitoring (such as managing wilderness areas, tracking geologic events, and so on).
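
To see why better batteries plus thriftier CPUs add up to such leverage, here’s a back-of-envelope Python sketch of battery life for a mostly-sleeping device; every number in it is hypothetical, just chosen to be in a plausible ballpark for low-power parts:

    # Back-of-envelope battery life for a mostly-sleeping wireless
    # device. All numbers are hypothetical, ballpark-ish values.
    CAPACITY_MAH = 2400.0     # roughly a pair of alkaline AA cells

    SLEEP_UA = 5.0            # sleep-mode current, in microamps
    ACTIVE_MA = 20.0          # awake current (sense + transmit), in mA
    ACTIVE_MS = 50.0          # time awake per wakeup, in milliseconds
    WAKEUPS_PER_HOUR = 60.0   # wake once a minute

    active_s = WAKEUPS_PER_HOUR * ACTIVE_MS / 1000.0  # seconds awake/hour
    avg_ma = (ACTIVE_MA * active_s
              + (SLEEP_UA / 1000.0) * (3600.0 - active_s)) / 3600.0
    years = CAPACITY_MAH / avg_ma / (24.0 * 365.0)
    print("average draw %.4f mA -> about %.1f years" % (avg_ma, years))
    # About 12.6 years with these numbers. Note that battery
    # self-discharge, ignored here, is one real obstacle to 20- or
    # 30-year lifetimes.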

Those kinds of scenarios imply interesting requirements for the software that would power those devices, and for the systems that would manage and monitor them. Unless you’re a programmer for a Mars rover, accustomed to not being able to reach over and hit the reset button at a moment’s notice, my guess is that achieving a world where devices run productively for 30 years, possibly untouched by a human, will take some work. It’s an interesting problem.

PS: Not to mention the environmental issues associated with the production of so many batteries and the challenge of recycling them when their useful life is over, or related health issues – have you ever seen a ‘leaking’ dry cell and the damage done to its immediate environment? (Added 6 October 2010:) And then there are the social / policy / privacy issues. More on that later.

PPS: Watch any of the videos on this site (this one is my favorite) and ponder the technology advances that have made this form of ‘connected device’ commonplace, and a platform for frenzied experimentation and innovation: small, cheap, lightweight, powerful batteries, sensors (accelerometers, gyros, pressure sensors), cameras, compute devices, motors and associated electronic controllers, servos, GPS modules… all integrated and leveraged by sophisticated software.

‘Bot Trends, or, “How Did We Get Here?”

So, what did a ‘bot geek do for fun hundreds of years ago?

My guess is that they hacked around with clocks, especially if the money was good. The first accurate, portable timekeeping devices were made possible by extreme cleverness and a willingness on the part of their inventors to consider alternate perspectives, coupled with advances in metallurgy and other technologies.

It’s interesting that those first watches were rather small; Harrison’s “H5”, which, like its predecessors H1 through H4, took years for him to construct (early 1770s), was designed to fit in one’s pocket, and was accurate to one-third of a second a day… which is likely more accurate than today’s average cheap wristwatch.

Fast forward a couple hundred years, and we’re faced with the spectacle of the “Spot Watch”, which sports a CPU, memory, a display, and a radio that receives data over an FM sub-carrier. Via the 1-way data link, it can keep its internal clock synced with that of the larger cosmos, but more interestingly (to the geek), applications can be downloaded to it for local execution. All in a package that fits on your wrist.

You may ask yourself, How did we get here? Well, as you might expect, Moore’s Law plays the leading role on many fronts; it’s also why the phone in your pocket likely has more compute and memory power than that of several Apollo launch vehicles combined.

All this is well and good… it’s been fun riding the wave of shrinking-but-more-powerful computing devices.

But what I think is especially interesting now is that additional trends have coupled into Moore’s Law in a sort of Geek Perfect Storm, opening up what feels like a whole new frontier for those who play with ‘bots. Building on the foundation of Moore’s Law, we have:

  • New capabilities in the form of sensors, displays, GPS functions, and sophisticated 2-way wireless systems are entering the hacker mainstream at lower and lower price points, and continuing to drop in price from there. For instance, you can buy a hobbyist-friendly GPS receiver for $20, a multiple-axis accelerometer or RFID reader for $25, a color display similar to what you’d find on a feature phone for $15, a basic wireless transmit/receive set for $10, or a sophisticated wireless mesh network (based on the ZigBee specification) starting at $25 a node. So, for under $100, you could put together an interesting gadget, perhaps controlled by a $20 Arduino compute module. (Prices pulled from here.)
  • High levels of chip/function integration that have driven costs down for the average consumer ($29 DVD player, anyone?) have also benefited the ‘bot geek. For instance, you can buy a module that looks like an RJ-45 socket for your ethernet cable, but it just so happens to implement a TCP/IP stack and throws in a general-purpose Linux operating system environment for your apps for good measure. Or, there’s the 3.2″ color touchscreen which includes a general-purpose computing environment, a number of I/O pins, a speaker, an SD card slot, and the ability to read FAT files… for $80, and in a package not much bigger than your computer mouse.
  • The Rise of The Maker: “Maker”, a term popularized by O’Reilly’s “Make” magazine (not to mention Daniel Lanois), describes a philosophy (and perhaps a cultural belief system) wherein a high value is placed on the practice of taking things apart to learn how they work, how to make them better, how to re-purpose them to serve new ends or just for fun, and how to keep them out of landfills in the process. There have always been “Makers” - I remember some old guys in the neighborhood where I grew up who would scavenge radios and TVs from the curb on trash day. But while Moore’s Law brought prices down and complexity up, it also meant that when you tear into your PC’s dead DVD burner… there’s practically nothing inside for you to mess with. It’s not very satisfying… unless, of course, you pry off that laser and find a way to burn stuff with it. Makers assert that manufacturers should design their stuff to be more open, to encourage repair and hacking. At some point, this same crowd gets around to building new stuff, perhaps atop the old stuff, and turns to the kind of cheap hardware and easy software integration mentioned above to pull off their exploits.
  • The Role of Open Source and Community: Software developers are familiar with the value of “Open Source” software and the associated communities of developers. If you’re able to build new software by leveraging existing, debugged, and community-supported modules, the overall velocity of your project increases, as does the value of the community if you donate your work back. Well, the concept works with hardware also. Example: in the hacker space, there’s the Arduino, “an open-source electronics prototyping platform”. Multiple vendors offer Arduino-spec’d compute modules with standard connectors and add-ons (“shields”). This helps sustain a positive feedback loop (sometimes assisted by “Hacker Spaces”, such as “NYC Resistor“, or events such as “Maker Faire“) in the form of a fervent community who are happy to help newcomers bootstrap their own crazy ideas, and vendors who sometimes adopt those ideas into new products of their own, or at least support interoperability (surprising example here). The resulting leverage is amazing; you’d be surprised by what you can throw together in a weekend. The ultimate expression of this is an open source ‘bot that can “print” in three dimensions, and can thus make arbitrary things (potentially even all of the parts necessary to make a copy of itself). Such devices - such as the RepRap or the MakerBot - can be driven by open source designs found in community-driven libraries such as Thingiverse.
  • The Magic of (high-level) Software: No more assembler! For instance, the Arduino project includes a sophisticated software development environment and run-time libraries. If you’re a beginner gadget Geek, you’re going to move a lot faster if you can write in a high-level language similar to one that you probably already know, and leverage run-time libraries that abstract away the details of interfacing with the hardware. (A small taste of that leverage appears just below.)
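
Here’s a sketch of that leverage on the PC side, in Python, using the real pyserial library to read lines printed by a hypothetical Arduino sketch over its USB serial link; the port name and baud rate are assumptions you’d adjust for your own machine:

    # Read lines from a hypothetical Arduino printing sensor values
    # over USB serial, using the pyserial library (pip install pyserial).
    # "/dev/ttyUSB0" and 9600 baud are assumptions; adjust as needed.
    import serial

    port = serial.Serial("/dev/ttyUSB0", 9600, timeout=2)
    try:
        for _ in range(10):
            line = port.readline().decode("ascii", "replace").strip()
            if line:
                print("the gadget says:", line)
    finally:
        port.close()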

We might best identify the Ham Radio operators of a generation ago as ancestors to today’s Geeks / Makers (and perhaps these guys, also). Ham Radio operators had a strong sense of community which encouraged the sharing of designs for rigs, antennas, and exploits (extreme distance, lowest power, video, etc). They used their own medium - Ham Radio - as the basis for their community. It was probably a lot of fun to communicate with someone on the other side of the world using a radio you built yourself.

Similarly, today’s connected devices seek to be part of the internet, which is the largest (and most chaotic and distributed) device in its own right. The internet is also the foundation for the very active device community… so, as with Ham Radio, the medium and the community are the same. Perhaps more so, since today a connected device can host its own web site and potentially participate actively in the community that created it.

The upshot of all this is that it’s a great time to be thinking about connected devices, for fun and profit.

What’s a ‘bot, anyway?

Well, “I know one when I see it“. More helpfully, I’d say that a ‘bot is a widget, gadget, or device that is ‘connected’ to the outside world and can say something about its own state, or respond to commands. Thus, the difference between, say, a nutcracker (a gadget) and an iPod Touch (another gadget) is that the latter can run code and communicate with, say, a web site.

A PC is a ‘bot, but the focus here is on special-purpose or single-purpose smart devices that can run code and communicate with the outside world. Examples include: smart thermostats, your car (in the not-too-distant future), and MER-A and MER-B, much better known as Spirit and Opportunity.

Gadgets are much more interesting when they’re connected to each other and, perhaps, to occasionally-controlling computers - whether in your house, car, or backpack - perhaps because of Metcalfe’s law.
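
(Metcalfe’s law, sketched in a few lines of Python: a network’s value scales with the number of possible pairwise links, n(n-1)/2, so each newly-connected gadget adds more links than the one before it did.)

    # Metcalfe's law in miniature: value scales with the number of
    # possible pairwise links, n * (n - 1) / 2.
    def links(n):
        return n * (n - 1) // 2

    for n in (2, 5, 10, 20):
        print("%2d connected gadgets -> %3d possible links" % (n, links(n)))
    # 2 -> 1, 5 -> 10, 10 -> 45, 20 -> 190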

The implication of Moore’s law, along with implicit corollaries for energy storage technologies (batteries, capacitors, etc) - is there a law yet in Wikipedia for this?? - is that we’ll be seeing more ‘bots around us, doing more on our behalf, at greater price efficiencies. In some cases, if these things are designed and deployed well, they’ll actually simplify things for their human overlords and may even seem to be performing magic on our behalf. Of course, given that they’re just so much hardware and software designed by those same humans, there’s a fair chance they might not actually help things at all, either.

These are the topics I hope to explore a bit in these posts, with a mixture of examples from my own experiences and with any luck some high-level musings about what could be.

(6 October, 2010: updated for spelling)