Book of the Future: Are you ready for tomorrow?

Society Will Continue to Accept New Technology
Fri, 29 Aug 2014 12:16:39 +0000

Three quarters of UK adults have a smartphone. 24m of us log into Facebook every day.

Ten years ago a state of the art smartphone was a Handspring Treo 650. Suffice to say I was (nearly) the only one of my friends to have one of these. The closest things to Facebook in 2004 were MySpace and Friendster. I bet your entire family didn’t share their daily comings and goings on those networks.

My point is that as a race we are capable of adopting new technologies very quickly. In just a decade we have become a connected nation, online all the time and comfortable communicating via completely new devices and forms of media.

Some might argue we have adopted these technologies too fast, without sufficient critical challenge. Certainly there is a Facebook backlash. The tech-savvy and privacy conscious have started abandoning this network, whose owners choose to push the envelope of what is acceptable use of our private data, only rowing back when the chorus of criticism reaches sufficient volume. Revelations about the abilities of the NSA and GCHQ to tap our other digital communications have left many wary.

But in the mainstream the smartphone and the social network are now standard components of everyday existence. They are core features in an array of digital prosthetics on which we are increasingly reliant. Augmentations for our memory, sense of direction, and social interactivity are accepted. Normal.

This leads me to believe that the next wave of technology coming in will be similarly accepted. Today anyone sporting Google Glass gets a lot of attention. In ten years when smart glasses come free with your data contract and look more like this Kopin prototype? They’ll be as remarkable as an iPhone.



We’ll get over the difference factor but there remain issues to be resolved, as there still are with smartphones and social networks. Earlier this week I was talking to BBC Stoke about a local politician making the latest in a long line of Facebook and Twitter gaffes by public figures. Partly she was just daft (telling the world she once flashed her breasts to get out of a parking fine – she is the cabinet member responsible for transport), but partly she is operating within a fuzzy set of guidelines. An etiquette that is still evolving.

Cameras present one of the clearest challenges to this: how does society respond when everyone is sporting a camera and can shoot video or stills with just a thought or a blink?

I think we’ll adapt. When did you last hear about ‘happy slapping’? The phenomenon rose, became known, then socially unacceptable, and slowly disappeared back out of public sight. No doubt it still happens, but society has boxed it off as an issue.

There will doubtless be ructions when smart glasses become mainstream – they’re already happening in San Francisco. But we’ll adapt.

What might be more challenging is adapting to the increased social imbalances that this technology might bring. Smart glasses and their ilk are human augmentations, even more clearly than having the latest smartphone or PC. People with access to the latest technology will be able to do more than those without, and that is a clear – if not wholly new – issue of material wealth increasing privilege.

The Future of Food? Grow Your Own – Automatically
Mon, 18 Aug 2014 10:22:31 +0000

A few years ago a large food producer commissioned some research on the future of food. It was a PR exercise, designed to produce some interesting, light-hearted stories. Instead what came back was a pretty stark message: we as families and individuals will all have to produce much of our own food because that’s the only way we can produce enough.

Needless to say, the research was never published. It wasn’t the sort of story they were looking for.

I couldn’t tell you who the producer was, even if I could accurately remember: I was told about it in confidence a few years ago and never saw an actual copy. This is not a piece of rigorously qualified information. But it’s believable in the current context.

The future of food production has become a hot topic. Today, globally, we produce more than enough food to feed everyone – enough for 12bn people according to the UN World Food Programme. But since much of it is fed to livestock (and because we let money get in the way of keeping people alive), we don’t manage to feed all 7bn. In a few years’ time the population is likely to peak around 9bn. Unless we produce an awful lot more food (twice as much by some estimates) or all cut the amount of meat we eat (the trend is going in the other direction as large economies like India and China develop), we are going to have even more serious problems feeding everyone.

Closer to home there are issues of food security and self-sufficiency. According to a report by the House of Commons Environment, Food and Rural Affairs Committee last month, the UK has become steadily less self-sufficient over the last twenty years. We now produce just 68% of the food that could be grown here – the rest is imported. Given the current levels of political instability, and the growing effects of climate change, it seems unwise to have so little control over feeding our own population.

In a local context, there are two questions to address: what we eat, and how we produce it. The former has all sorts of answers from the prosaic to the unpalatable (for some). The simplest answer is that we all go vegan, but the simplicity of this answer exposes why it won’t happen: human beings are creatures of desire more than logic. Even if some Californian fad for veganism spread to the entire Western world, it is likely that the developing economies would want their days of unfettered carnivorous gluttony just as they want their chance to experience the economic growth that fossil fuels provided the West. And frankly, it’s hard to argue that they should abide by different rules just because we screwed the world up.

We could continue eating ‘meat’ but in different forms: artificial, insects, etc. I think this will become a proportion of the mix and may eventually displace some livestock production (particularly beef, the most resource intensive). But it’s going to take time.

In the meantime we’re back to that question of production: where does our food come from? There seems to be a growing trend for grow your own. I’m not ahead of the curve on this: just look at the column inches and airtime devoted to gardening, or the waiting lists for allotment spaces. Check out the IncredibleEdible project in Todmorden or the guerilla gardens springing up all over the place. There’s even an app to help you grow and share produce.

But mainstream as this is (up to 5% of all fruit and veg is grown at home, based on 2012 figures – the most recent I could find), it’s not at a level that will offset growing international competition for crops, changeable weather, and political instability.

For this to change, growing at home needs to be easier. Automated. Like an appliance.

This may not sound very ‘green fingered’ or organic. But the nature of our time-stretched lives these days (a cliché but a reality), and the fact that not everyone wants to garden, mean that automation is a necessity if we all want regular crops of edible produce.

Imagine this: an indoor appliance the size of a washing machine that feeds, monitors and returns regular crops of salad leaves, tomatoes, herbs, brassicas and potatoes, and does so with a minimal use of water and electricity. All you do is plumb it in and feed it with seeds and nutrients every now and again. It could even monitor your fridge and change the production rate to ensure it only delivers fresh produce as and when you need to restock.

That’s the sort of thing that could become truly mainstream and account for a sensible proportion of our regular produce. And it’s entirely possible with today’s technology: hydroponics, LED grow-lights, cheap microcontrollers and cloud computing. It could be installed anywhere, even for those without a garden. Sure, it’s not as green as growing outdoors, but it is more reliable and less effort, and that’s what people want. And it’s certainly greener than shipping your vegetables half way across the world.
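As a thought experiment, the monitoring loop at the heart of such an appliance could be sketched as below. Everything here is a hypothetical illustration: the sensor names, target ranges and corrective actions are invented for the sketch, not horticultural advice or a real product’s specification.

```python
# Illustrative control logic for a hypothetical hydroponic grow appliance.
# Target ranges are made-up examples, not growing guidance.
TARGETS = {
    "ph": (5.5, 6.5),            # nutrient solution pH
    "ec": (1.2, 2.0),            # electrical conductivity, mS/cm
    "water_temp_c": (18, 24),    # reservoir temperature
}

def check_readings(readings, targets=TARGETS):
    """Compare sensor readings against target ranges and return a list
    of corrective actions for anything out of range."""
    actions = []
    for name, (lo, hi) in targets.items():
        value = readings.get(name)
        if value is None:
            continue  # sensor not fitted or not reporting
        if value < lo:
            actions.append(f"raise {name}")
        elif value > hi:
            actions.append(f"lower {name}")
    return actions
```

In a real appliance a loop like this would run on a cheap microcontroller, with the actions driving dosing pumps and the LED grow-lights, and the cloud side handling the fridge-monitoring and scheduling logic.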

There’s another project to tackle then…


Note: After this post went out in the News from the Future newsletter last week (sign up here to get posts like this one early), a number of people pointed out that projects like the Urban Cultivator have already tackled this challenge. I’m now wondering if a broken dishwasher can be cheaply recycled into such a device…

Boosting Your Personal Bandwidth
Tue, 12 Aug 2014 10:21:10 +0000

I’m tired. Sleep deprived. I was at 5live until 1am this morning, only to not appear. I’m not complaining: the death of Robin Williams was both very sad and understandably of greater editorial significance than my regular tech slot.

Being tired is not an unusual state for any of us these days: there’s a lot to do between work and life, partying and kids, sports and hobbies, family and friends. The problem is that when we’re tired we’re not very productive.

Productivity: Goal or Threat?

Productivity is one of those odd qualities, equally praised and vilified. Driven individuals are always seeking life-hacks to boost their personal productivity. But when productivity targets are imposed, it can become ugly and corporate, cold and forceful.

In my privileged position as a self-employed person making ends meet, who also loves their work, productivity is about personal reward for me. Financial, but more importantly, emotional. I value my own success more than the rewards it brings me. Always have (as some of my career choices will show).

This means I am very keen to improve my productivity. But I’m not a great fan of all the rules and methods that are meant to more usefully structure your working day. My work is varied and creative. Strict routines and patterns are hard to maintain and quite often I find they disrupt the natural flow rather than enable it.

Golden Moments

Maybe I’m just not disciplined enough, but I’m focused more on ensuring I make the most of those moments when my brain seems to be firing on all cylinders. These moments are rare, hard to plan for, and they don’t usually come when I’m sat at a desk. The first hour after I wake up is incredible. I regularly whip out my laptop from where it is stored under the bed and knock out a thousand words. That’s fine: my wife is very understanding about the screen glare and key tapping.

But what about those seconds where you can’t access some means of capturing your thoughts?

A couple of times on holiday last week I resorted to paper and that’s cool – though my family may not have noticed (I blame you, Plundernauts), I was actively trying to minimise screen time. But now I have to translate my paper scribblings (‘inky spiders dancing on a page’ is how one teacher described my writing) into something I can a) comprehend and b) use.

Paper Bandwidth

Paper is pretty low-bandwidth and low-fidelity for someone with my very limited graphical ability though. Co-operating on DIY projects with my wife is not easy when even my finest sketches show each component to a totally inaccurate relative scale. Digital devices aren’t always available or convenient either. I once wrote a thousand words on a smartphone on a particularly packed tube ride, but it’s not an experience my wrists would like me to repeat. Sorry for the mental image but I can’t capture Evernotes in the shower (even with voice recognition: I tried).

In short, I’m back to one of my personal hobby horses: for early digital natives like me (my school projects were done in Lotus Ami Pro), there is no higher bandwidth means of capturing our output than the keyboard and mouse. And using this means being seated, ideally at a desk, in a warm, dry and powered environment. We are not always in these environments when inspiration strikes or when our minds enter those incredible states of clarity that occasionally come over us. I want an always-accessible, truly portable, truly practical means of translating my thoughts into actions, products and plans.

Now this might sound pretty invasive.

Surely the smartphone has already turned us into an army of 24-hour workers, always connected to the corporate machine?

Well yes, for some people that is true.

Aren’t you the person who has argued for an ‘analogue week’ in order for us all to disconnect occasionally, re-engage with the physical world, and actually talk to our families?

That is true. But that doesn’t mean that outside that analogue week, I don’t want to be as productive as possible.

I’m pretty sure your family would like to communicate with you, without thinking you might be making digital mental notes about work. You’re bad enough at staying focused when there are any digital devices around.

Again, you (I) have me bang to rights. This will require an even greater level of mental and social discipline than we currently (often fail to) apply to the current generation of technology.

But…but… I can’t help but want it.

Visual to Neural

What is ‘it’ then? There aren’t many great candidates today. However accurate voice interfaces become they are frankly anti-social. It’s bad enough being on a train full of people chatting to their friends, family and colleagues, without adding a load chatting to their machines as well.

Touch and gestures? I suppose some form of learned signing could work, but that ties up your hands, and again it’s likely to lack bandwidth. Some people can text at an incredible rate but not faster than they can on a full keyboard. I want an all-round improvement.

Neural interfaces? That seems like the obvious route. But these seem to be so far away. The commercial options today are largely limited to binary options: yes and no, left and right. The most sophisticated medical devices in trial might allow the control of artificial limbs but even this incredible feat is a long way from capturing complex thoughts and language.

For the time being if I am going to maximise my personal productivity and take advantage of those moments of insight I’m going to have to do it with today’s technology and the physical interfaces I was born with. Just utilised flexibly at the times that inspiration strikes.

Frugal Innovation and the Maker Movement
Tue, 29 Jul 2014 11:30:09 +0000

Charles Leadbeater has a new book out. Leadbeater, writer, political adviser and all round big thinker, has turned his attention to the forces of innovation outside of Silicon Valley. Those inventors, activists and entrepreneurs who operate in a world of constraints as opposed to bountiful capital and light touch regulation. What he calls ‘frugal innovation’. I haven’t had a chance to read the whole book yet, but as usual the RSA podcast is a good start.

The idea of frugal innovation seems particularly relevant having spent the weekend exhibiting at Maker Faire. As regular readers will know, I like making stuff and most recently have been building a home automation system using open source hardware. This is partly for fun, partly because I’ve always wanted a smart home (Tony Stark envy), and partly as an experiment to show what’s really difficult about these things (the user experience design, in case you’re curious).

When I found out Maker Faire was running again this year at the Museum of Science and Industry, I decided to take a stall and show off my work. If you haven’t been to a Maker Faire, think of it as show & tell for grown-ups. There are some stalls there where people sell things, but mostly it’s about sharing what you’ve made (and learned) with other people, both fellow makers and members of the public – lots of them young, which was great. The kids loved playing with Jock the RoboRaspbian, my Raspberry Pi-powered, web-controlled toy robot. And the adults generally liked the idea of keeping an eye on their homes remotely, and being able to cut their energy bills by turning off all the lights their kids leave on.

Maker Faire Manchester

The question I was asked most often was whether I planned to commercialise the system. The answer is always ‘no’. For a start I’m sworn off more start-ups for now – at least ones that aren’t connected to my core business. I don’t think there’s a lot of money to be made from the system I’ve built: Samsung, Apple etc have the manufacturing capability and supply chain to do things more efficiently and at greater scale than I ever could. But most importantly what I have built depends hugely on the work of others – something that was true for all the makers I spoke to at the Faire.

The software that sits in each of my home automation nodes is heavily based on ‘RESTduino‘, a project with multiple contributors, who have given their work to the community at no charge. The web platform uses libraries for various functions like talking to the nodes, graphing the data, and communicating with the energy monitoring system – all written by others and given to the community at no cost (‘open source’). Even the hardware I’m using – the Arduino – is open: anyone can replicate its design without licence fees.
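To give a flavour of why this stack is so easy to build on: RESTduino’s whole point is that each node’s pins become web addresses. The sketch below shows how a client might construct requests to such a node; the URL scheme (GET /&lt;pin&gt; to read, GET /&lt;pin&gt;/&lt;value&gt; to write) and the node’s address are assumptions for illustration, so check the project’s README before relying on them.

```python
def node_url(host, pin, value=None):
    """Build a RESTduino-style URL for a home automation node.

    Assumed scheme (hedged, per the RESTduino project's documented style):
      GET /<pin>          -> read the pin
      GET /<pin>/<value>  -> write a value to the pin
    """
    path = f"/{pin}" if value is None else f"/{pin}/{value}"
    return f"http://{host}{path}"

# A node at a hypothetical address: read analogue pin A0,
# then switch digital pin 13 high (e.g. a relay driving a lamp).
read = node_url("192.168.1.50", "a0")
write = node_url("192.168.1.50", 13, "HIGH")
```

The web platform then only needs an HTTP client (e.g. `urllib.request.urlopen(read)`) to poll sensors and flip switches, which is exactly the kind of glue code the open source libraries handle.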

All of this means it would be a complex affair to try and scale what I have built up into a profitable business. But without it I wouldn’t have been able to build anything at all – or if I could, doing so would have taken ten times as long and cost ten times as much.

In Leadbeater’s last book, We Think, he looked at mass creativity as exposed on YouTube and other social channels. The Maker Faire showcases a more physical level of mass creativity, enabled by the open sharing of different hardware and software components. Every Maker takes those components and builds something uniquely their own, to fulfil their particular needs (or wants). Generally they then share the new components they have built to bridge the gaps back to the community, and the process continues.

As access to these components, and the ability to replicate them, is increasingly commoditised, it will be interesting to see what effect this has on the concentrated innovation of the Apples and Samsungs of this world. Imagine you can search a database of products and systems for a solution to a problem/challenge you are facing. You find a design – of software or hardware – that appeals, and then render it out, either as an installable application or a physical product through a 3D printer and some purchased components.

This exists (to some extent) today in Thingiverse, it’s just not widely used by the average consumer yet. But in just a few years we might all be sharing, or consuming, each other’s frugal innovations.

Intersection: Where Macro Trends Meet Your Market
Mon, 28 Jul 2014 11:30:19 +0000

Futurism is a practice with an increasing level of professionalism and process. Futurologists, trend forecasters, and strategists use a variety of different methodologies to understand what’s happening, filter the noise and try to inform and qualify their predictions.

At Book of the Future we have created our own approach that we call Intersection, to allow us to make practical predictions and inform the advice we give to clients. Specifically it is designed to help us understand and demonstrate how macro trends, related to or driven by technology, will impact on the specific sectors our clients are working within.

3D Lens

The process starts with our ’3D Lens’: we believe that in a specific place (the UK) and over a specific period (the next twenty years), technology will be the biggest driver of change. This is predicated on the simple fact that within the time and space boundaries specified, we are expecting only steady, linear change in the other classic PESTLE factors – Political, Economic, Social, Legal, Environmental. By contrast technology is advancing at an exponential rate, as described by Moore’s Law, and this advance is touching every area of life and work.

Of course there is a small chance that we will have a revolution or a massive natural disaster, or other shock event in the UK in the next twenty years. One that may have an enormous impact. But our role as futurists is not to try and second guess the un-guessable – the ‘black swan’ events. It is to help the organisations that we work with to adapt to the visible future, the one that we believe will define the macro picture and that is already defining it today. As William Gibson said, “The future is already here, it’s just not evenly distributed.” We operate in the small pockets of the future that are here today and we help to expand them to encompass our clients.

Primary Trends

Within the scope of our 3D ‘Lens’ we break the primary technology-driven trends down to five core areas:


Put simply, things happen faster. Rich data moves at the speed of light around the world. Financial transactions take place so fast that they can no longer be handled by humans.

This changes businesses: there’s no value in six-month-old data when someone else can supply it in real time.

And it changes expectations: consumers and business users alike are rapidly frustrated by anything but an instantaneous response.


If there is a technological solution to a problem and it doesn’t cause grievous social harm, then someone will probably implement it.

If there are legal, environmental, social, financial or technical barriers that need to be overcome, then they likely will be and sooner than you think.

Technology has been shrinking in size and cost, and growing in power and usability at an exponential rate for decades.

This trend will continue to the point where technology is near-invisibly integrated into the environment around us, and we are not always aware whether the capabilities we are using are ‘normal’ human abilities or ones augmented by technology.


Business success in the past was often characterised by the ability to optimise processes, supply chains, prices.

This retains value, but as models, channels and demands are changing faster and faster, success is increasingly defined by agility: the ability to enter and conquer new markets and opportunities fast.

This affects the structure of organisations: rather than slick, vertically integrated monoliths they need to be stratified into loosely coupled layers.

Each layer interfaces with the other but might also interface with third parties, offering its thin layer of optimised service as a building block in other people’s value stack.


Technology has lowered the barriers to market entry. The capital costs of a start-up, barring any physical stock, are trending towards zero.

This means more players in any market but also more models, and more channels.

There won’t be a single paradigm in any industry any more: competitors may supply the same products or services in different ways.

Likewise, many and various sub-cultures and micro-markets will exist on a variety of standardised, open platforms.


There is a growing global community that exists outside of national borders. They share an increasingly common culture, albeit coloured by local norms.

Social networks now capture a huge proportion of the global conversation, just as digital media services ensure the wide spread of common cultural reference points.

The last step towards true globalisation will be brought about by the ease with which products can now be moved: as data.

The rules of manufacturing and the supply chain are about to be radically re-written by hyper-local, automated manufacture, placing a huge emphasis on the ready supply of a variety of basic feedstocks.

Market Impact

With each organisation that we work with we look for the market impact of these macro trends, and we focus on areas of stress that already exist in the business. These are often surprisingly easy to find, at least when coming in with a fresh perspective, and appear in all areas of the business. New competition, changes in regulation, rising materials or labour costs, poor customer or supplier engagement, falling budgets or margins, breakdowns in compliance. You can usually find some of these in the industry press before you even start interviewing members of the organisation.

For each stress that we find, we test it against the macro trends and look to see whether it might be mitigated or exacerbated. We try to estimate the scale of the potential impact. Sometimes this is very hard, sometimes it is quite straightforward: if a formerly physical product can now be delivered digitally, the supply chain costs are likely to be orders of magnitude smaller.

Once we have assessed the market impact we can begin to rank the intersections and focus on those that will have the greatest effect. In reality the solutions we design around these intersections are often structural changes that will help to address others as well.

Narrative vs Empirical

You couldn’t call this process scientific. There is no repeatable experiment. Different people following the same methodology might achieve a different result (though we are trying to formalise the process within the organisation at least, so that there is consistency across future engagements). But the evidence from our interactions with clients over the last 18 months is that it is undoubtedly valuable.

Feel free to use the information in this post to try to replicate the process in your organisation. Or if you’d like some help, you can always drop us a line.

Breaking Band: Building a Better Internet Infrastructure
Wed, 23 Jul 2014 09:43:29 +0000

If you follow me on social media, you will know that my broadband is down. It has been since the storms on Saturday.

I have no problem with my service going down. We’re going to have to get used to crazy weather over the next few years, and this lightning storm was like nothing else I have experienced in the UK. Lightning hit something, or water got somewhere it shouldn’t and things went down.

(Sh)It happens.

The problem is what happens next. Four days of wrangling with poor information, idiotic ‘customer service’ scripts, and under-equipped call centre staff insisting the problem is with my third-party router (only installed because the supplied one was utterly unreliable). Broken promises and repeatedly missed (self-imposed) deadlines. It took a concerted effort on Twitter to make something happen. That something is an engineer who has to come to my house today and work back from there, despite multiple customers in my area being simultaneously taken out (apparently not enough to justify it being called an ‘outage’).

There is no way this is an efficient way to run a business. But based on the chorus of recognition I’ve had across Facebook, Twitter and LinkedIn, my experience isn’t unusual for BT customers.

The Fourth Utility

Me whining doesn’t make for a great blogpost though and that’s not what this is about (though the scripts of the conversations with some of the call centre staff are pretty amusing). My point is that the internet is the fourth utility and it needs to be treated as such.

In the days when email and the web were the only things carried over the internet, it was annoying but not the end of the world if it went down. After all, for the first few years of consumer internet it was usually down: you had to dial up to get access.

But today the internet carries much more than these intermittent services. Across entertainment, environment and security it has become a platform in its own right on which we are increasingly reliant.

Without the internet I can’t access services that I pay for, like Netflix and Spotify. This adds to the ‘lost value’ of it being down.

My Nest thermostat can’t communicate with the world to find out what weather conditions it should be responding to. And I can’t communicate with it to turn it on and off, potentially reducing my comfort and costing me additional money in gas.

I can’t monitor the cameras covering parts of my property, or get alerts from my home automation system about intruders, floods or fires.

These are all what you might call ‘first world problems’ today, and I know I’m in the minority as a user of all these services. But they will be commonplace before long. And I pay good money for my internet platform to be able to use them.

Connected Age

We are moving to an age of ever-increasing connectivity. Some might decry our reliance on machines and their interconnection and I understand their concerns. I always think about the overweight chair-bound slobs in Disney’s Wall-E, beholden to their robot servants. But history suggests that technological advance usually drives life improvements, and human beings find ways to mitigate the risks they present.

If we are to continue the pace of technological advance, and retain our place as one of the more technologically-advanced nations, then we need to change the way we treat broadband provision. We need to stop looking at it as a luxury and understand it as a utility, and frame policy and provision appropriately.

At the moment there is far too little competition at the right levels of the market. Having most providers beholden to BT’s Openreach infrastructure does not drive innovation. The special deals that the government has with the big providers (BT and Virgin) to discount the tax they pay on their cables prejudices the market against new entrants. Regulation discourages the opening up of access to existing assets to allow the sharing of ducts, poles and other routes by providers, utilities and transport companies.

If my connection goes down in a few years time I would like my provider to know before I do and tell me. I’d like them to start the diagnosis and repair automatically, before I have called and without any human intervention. If I’m unhappy with their service I’d like to know that there are genuine, physical connection alternatives for my service, not just a re-branding of the same pair of wires.

Ofcom has highlighted that our broadband provision is some of the best in Europe. I’d agree with the FSB: it’s not good enough.

Emerging from the Colossal Cave
Thu, 17 Jul 2014 21:21:32 +0000

This post is based on the script from two presentations I gave this week at Creative Kitchen in Liverpool, and Tameside Together in Manchester. You can see the presentation, built using Impress.js, here.


You’re familiar with Moore’s Law, right? Coined by Intel co-founder Gordon Moore back in 1965, it suggests (based on the evidence Moore had witnessed back then) that the number of transistors that can economically be put on a silicon chip doubles every two years. In other words, your computing bang for your buck has been growing at an exponential rate for nearly fifty years.
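To put that doubling in numbers, here is a quick back-of-the-envelope calculation. It treats the two-year doubling period as exact, which real chip history only approximates:

```python
def moores_law_factor(years, doubling_period=2.0):
    """Growth factor in transistor count per chip if density doubles
    every `doubling_period` years (the classic Moore's Law framing)."""
    return 2 ** (years / doubling_period)

# From Moore's 1965 paper to 2014 is 49 years: about 24.5 doublings,
# i.e. a growth factor in the tens of millions.
factor = moores_law_factor(2014 - 1965)
```

Even if the true doubling period were closer to 18 months or 3 years, the shape is the same: a curve that dwarfs the linear improvements we see in most other technologies.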

This law, and a number of parallel laws about the speed of our digital connections, and the amount of stuff we can stick on a hard disk, have described the technology revolution. Devices getting progressively smaller, cheaper, faster, better.

But I think they miss a vital component of what has changed about technology: it has become more human.

I’ve often described early computing experiences as like travelling to an alien planet. The machine you interacted with was a giant monolith, housed in its own environment, speaking its own language and using its own customs.

Now I have a new analogy.

One of the first computer games I ever played was Adventure, or Colossal Cave, as it was otherwise known. It was on a BBC Micro. I was a bit young initially, but I have vivid memories of my dad and cousin getting quite into it. Colossal Cave was a swords and sorcery epic delivered entirely through the medium of a text interface. ‘Choose Your Own Adventure’ or ‘Role & Play’ novels (perhaps also only familiar to people of my particular vintage) on the small screen.

Now I can’t remember the exact plot or characters of the Colossal Cave, but I imagine, somewhere in the depths of this cave, you may have found an ogre. This, for me, is the early computer. Giant, hulking, slow-witted and inhuman.

A little closer to the surface, and a little more evolved, you may find an orc. These are your pre-GUI personal computers. Communication is easier, but they're still dim and gruff.

Then come your goblins: smaller, nimbler and more able to interact. Laptops with graphical interfaces, and even access to the Internet.

Today the elf-like smartphone is all the rage. Slender, attractive, and much closer to human in its ability to interact through touch, motion and voice.

But the elf remains at the edge of the cave. It can look out into the light and shout to us, but it can’t influence our physical world. Without some form of prosthetic there are hard limitations on its reach and strength.

The history of computing over the last half-century for me is one of evolution. Of computers evolving towards a state where their interactions with us are not limited to the screen, and instead they can communicate with us on all the levels that we communicate with each other, and change our environment around us.

As designers and coders we have for years been shining a torch into the Colossal Cave, briefly illuminating the intelligence inside so that we can interact. Now is the time for the computers to emerge from the cave and begin to communicate with us on our terms. But they need our help to do so.

Stepping back from my well-stretched analogy for a minute, there is good reason for us to help.

Do we really want to interact with data via a screen? Even the loveliest high-resolution touch display is an artificial environment relative to the majesty of the world around us. It's also incredibly low-bandwidth. Think about the breadth of senses you have, through which your brain manages to process information, microsecond by microsecond. Why limit ourselves to interacting through a few million pixels when such rich experiences are available to us?

Computers now have so much data at their disposal, and the intelligence to process it, that we can let them be autonomous. The screen and keyboard were created when we had to manually provide them with all of their inputs, all of their instructions. That is no longer the case. Computers can make decisions based on time, date, weather, environment, location, your social graph and any number of other data points. Why bind ourselves to manual control when they are capable of taking on tasks we no longer need to do?

There is a challenge here though. More than one in fact.

The first one is the age-old sci-fi question: should we? Should we give them this much power? Should we leave behind manual labour? What does it mean for jobs?

The answers to these questions are book-length in themselves, but I’m inclined to think we should accept and even encourage this next step in technological progress. For the simple reason that there are more challenges for human minds to tackle. Why not hand the problems we have already nailed over to machines, if they can solve them more efficiently?

The second challenge is around ‘how’. Because for all my bravado and optimism above, this stuff ain’t easy. Or more specifically, the user experience design challenge isn’t easy.

Here’s an example. I’ve been building my own home automation system, as I have documented on this blog. This is both fun (if you’re a geek like me) and a serious experiment: I’m using the smart home as a small scale model for the smart city. The basics are simple: a few hours, a few quid and some cobbled-together code gets you a system that measures all sorts of environmental variables and allows you to trigger electrical devices in response. But as soon as you start trying to design the user experience, it starts to get really complicated.

Take a simple lamp. I want lamps to come on if it's dark and there's someone in the room. And, more importantly, to turn off when the room is empty, saving me money and cutting my carbon footprint. You'd think the rules for that would be pretty simple, and they are – until human behaviour gets involved.

Because sometimes we like it being dark. When we’re trying to sleep, or get a little cosy on the sofa to watch a film. When the house keeps turning the lights on in those situations, it gets pretty annoying. So what do you do? Create modes? Change behaviour throughout the day? Have a manual override? All of these things are possible but what you realise is that the number of permutations is enormous: automating response to human preference is really hard.
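For the curious, the 'simple' version of that lamp rule really is a one-liner. The sensor values and the lux threshold below are invented for illustration:

```python
# Minimal lamp rule: on when the room is dark and occupied, off otherwise.
# The threshold and sensor readings are hypothetical illustration values.
DARK_THRESHOLD_LUX = 50

def lamp_should_be_on(light_level_lux, room_occupied):
    """The 'simple' rule: dark plus presence means light."""
    return room_occupied and light_level_lux < DARK_THRESHOLD_LUX

print(lamp_should_be_on(20, True))   # True: dark and occupied
print(lamp_should_be_on(20, False))  # False: empty room, save the electricity
print(lamp_should_be_on(200, True))  # False: daylight
```

It is everything after this – film nights, lie-ins, moods – that turns one line of logic into a design problem.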

This is why we need more people from the creative and digital industries to start experimenting with physical computing. Sure there are a few forward-thinking agencies playing with wearables and microcontrollers. But think about how many websites are produced each year. Imagine how fast we could change our environment and our economy if we produced even a fraction as many digital, physical devices.

There is a particular opportunity here in cities with a manufacturing heritage (and often a surprisingly strong living industry), and a more recent digital scene. Manchester and Liverpool are the two places where I’ve been spreading this message this week.

It’s a simple message and not a particularly original one, but I hope I have carried it to some new audiences. Computing is emerging from the darkness of the cave. Now is the time to greet it and introduce it to our world.

Smart Cities or Home Automation: It's All About UX
Mon, 16 Jun 2014 05:27:03 +0000

When you walk into my utility room, the lights come on. If it's dark. The latest evolution in my long-term project to make my dumb (but very pretty) old house smart is a rules engine. Every time I step in and my way is lighted, my heart leaps a little. It's a simple thing but it entertains. I'm still getting used to not having to turn the lights off though.

In the utility room and my office, both rooms down in the cellar, the rules are pretty simple: it’s always dark, you always need the lights on. But as I’ve started thinking about rolling these rules out across the rest of the house, I’ve realised things aren’t going to be so simple elsewhere.

Take the living room. Imagine you have a similar rule there: if it’s dark, and someone enters the room, turn on the lights. Great. But then you want to turn the lights down, get cosy and… watch a film. What then? Every time you turn the lights off, the system turns them back on again.

So you start to get into conditionals: if someone turns the light off, leave it off. Then you come in the next night and stub your toe because the lights don’t come on.

Maybe you put it on a timer to reset. Maybe you have a series of programmable ‘mood’ macros that set the rules depending on what you’re doing at the time. The point is not that there aren’t solutions. It’s that they are inevitably complex and need thinking about. They need designing.
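As a sketch of what one of those solutions might look like – a timed manual override layered on top of the automatic rule – here is a toy version. All names and timings are invented for illustration:

```python
import time

class LampController:
    """Hypothetical sketch: an automatic rule with a timed manual override."""
    OVERRIDE_SECONDS = 4 * 60 * 60  # manual override expires after four hours

    def __init__(self):
        self.override_until = 0.0  # while set, the automatic rule is suppressed

    def manual_off(self, now=None):
        """Someone turned the lamp off: respect that, but only for a while."""
        now = time.time() if now is None else now
        self.override_until = now + self.OVERRIDE_SECONDS

    def should_be_on(self, dark, occupied, now=None):
        now = time.time() if now is None else now
        if now < self.override_until:
            return False  # film night: leave the lights alone
        return dark and occupied  # otherwise the simple rule applies

lamp = LampController()
print(lamp.should_be_on(dark=True, occupied=True, now=0))      # True
lamp.manual_off(now=0)
print(lamp.should_be_on(dark=True, occupied=True, now=60))     # False: override active
print(lamp.should_be_on(dark=True, occupied=True, now=20000))  # True: override expired
```

Even this toy version bakes in a design decision – four hours of darkness, then normal service resumes – that someone in the house is bound to disagree with.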

Lessons from Santander

Demonstrating points like this in practical terms is part of the reason I started Project Santander, my home automation project, named after the home of the European smart city project run by Telefonica. The home is a good proxy for the city, where these design issues are only magnified.

Instead of having to satisfy my wife and children’s understandable demands for a house that doesn’t frustrate more than automate, imagine having to satisfy a whole city of people. This is the challenge facing the mayor of Santander and the team working with him from Telefonica and the University of Cantabria.

As I've highlighted before, the hardware challenges of Project Santander were (relatively) straightforward. I now have nodes around my house collecting temperature, humidity, light level and presence, and allowing me to control things like the lights above. These nodes are not dissimilar to what's being used in Santander. Yet they cost me just £10 each and were built entirely from off-the-shelf, open-source hardware and software. In Santander they rolled out 20,000 sensors for just €1m – around €50 each, not an order of magnitude different to mine.

These 20,000 sensors generate relatively little data to transfer and store: when there were 12,000 sensors in the city, they were storing just 5MB of data per day. This makes sense: a temperature reading can be stored as a single byte of data – potentially less. Do the maths:

  • 1 byte per reading
  • 60/5 = 12 readings per hour = 12 bytes per hour per sensor
  • 12 × 24 = 288 readings per day = 288 bytes per day per sensor
  • 12,000 × 288 = 3,456,000 bytes per day = 3,375 KB = 3.29 MB per day
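The same arithmetic in runnable form, for anyone who wants to vary the assumptions:

```python
# Back-of-envelope data volume for a Santander-scale sensor network.
BYTES_PER_READING = 1
READINGS_PER_HOUR = 60 // 5   # one reading every five minutes
SENSORS = 12_000

bytes_per_sensor_per_day = BYTES_PER_READING * READINGS_PER_HOUR * 24  # 288
total_bytes_per_day = SENSORS * bytes_per_sensor_per_day               # 3,456,000
print(f"{total_bytes_per_day / 1024 ** 2:.2f} MB per day")  # 3.30 MB per day
```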

Bear in mind the temperature probably doesn't change every five minutes. And even if it does, you can do some smart things to store many fewer readings than this, or store them more efficiently. Likewise with humidity, noise level, light and parking-space usage – the readings are so compact that many nodes can carry more than one sensor. This stuff is not particularly 'big' data.

So it’s cheap and easy to collect, transmit and store. What’s the challenge?

Making it useful. Making it intuitive. Making it human.

This is a user experience (UX) challenge. A design challenge. And it's an opportunity that I don't think much of the tech community – particularly those with the biggest skills for, and focus on, user interface design – has yet grasped.

Stop Redesigning the Web. Start Redesigning the World.
Thu, 22 May 2014 07:44:29 +0000

Friction ignites innovation.

The examples are all around us. Take banking. I tried to send money abroad this week. Urgh. A thoroughly 20th century experience, replete with long acronyms, complex codes and lots of cost – 25% of the amount I was trying to send.

It's no surprise that finance, banking and payments are hot spaces for innovation, from crowdfunding and peer-to-peer funding, to merchant systems, to whole new currencies. Organisations like PayPal, iZettle and bitcoin are tackling a staid old system that has failed to move truly into the internet age.

That age is characterised not just by technology but by culture: openness and sharing, of hardware and software, interfaces and protocols. A culture of speed and action.

Increasingly that culture is moving out of the digital world and into the physical. The new hardware categories – wearables, smart home, 3D printing – are much more open than their predecessors. If the hardware itself is not an open design, based on off-the-shelf components, then the software usually offers an API. Having had my Nest smart thermostat installed, I can’t wait to start playing with its API, integrating it into my own home automation system.

That system, Project Santander, has itself been built on these internet principles: rapidly prototyped using off-the-shelf hardware and shared software components. The most challenging part of its construction? The core software.

This software has been created using the simplest of web technologies: PHP, MySQL, HTML, CSS, JavaScript. Because I don't know anything else – in fact I barely know this. As I will happily concede to anyone, I am no coder.

Imagine what you can do with greater skills. Imagine the problems you can tackle. For me what is exciting about the ‘Internet of Things’ is the application of all those internet principles and skills to physical world problems. Skills of design and code that used to be confined to tackling problems in the virtual realm can now be applied to the physical. Energy, safety, health, fitness, food, education, and much more; the possibilities are endless.

This theme has a particular relevance and resonance in Greater Manchester, a place where great leaps forward in the science and technology of the physical world and the virtual have been made. Officially there are 45,000 people in the digital and creative sectors in Greater Manchester.

Imagine what we can do with the world if our digital skills are increasingly applied to physical problems.

The New Digital Divide: Makers and Consumers
Wed, 14 May 2014 10:10:08 +0000

I have a new laptop, at least for the length of this trial. The team at Dell have loaned me an XPS and I have to say it's flippin' awesome. OK, I don't have to say that – it wouldn't be much of a trial if I did – but it's true.

In between trials of new machines I operate on a five-year-old desktop or a six-year-old laptop. Both are perfectly functional but limited. The laptop performs admirably for its age, thanks to a lightweight Linux OS, but unfortunately its frame is anything but lightweight: more luggable than portable. The desktop is very comfortable to use with its big screen and a posh mouse and keyboard (thanks to a never-ending trial from Logitech). But a lack of RAM means it becomes a little ponderous when running lots of Chrome windows or anything else taxing.

By contrast this new laptop has everything: slender metal frame, Core i7 processor, buckets of RAM, and a battery that lasts so long I’ve stopped bothering to carry the charger. Even if I use the laptop to charge my phone and other devices it seems to get me through days of work.

I can’t definitively say this is the best machine out there for the money – it’s not that sort of test. But its sheer capability has reminded me of something: the dramatic difference that remains between a ‘real’ computer and a tablet or smartphone. For me this is an increasingly important frontier in the digital divide.

Makers and Consumers

Because I'm using this machine as my main device for the period of the trial, I've had to install my regular software stack on it. I could automate this process and probably will in future, but it's actually quite interesting to install things as and when the need arises. It makes you very aware of the software on which you're most reliant. In the past this approach has also made me very aware of the (un)availability of an internet connection when you need one. But on my second trip over the Pennines in as many weeks, I find myself happily downloading hundreds of megabytes of software over Three's 3G and 4G networks. There's a reason I have an unlimited contract…

The software I have installed started with a browser or two: Chrome (for browsing, mail and apps) and Firefox (for web design and testing). Then a text editor (Bluefish) and version control (Git). Then an office suite or two – LibreOffice and MS Office. And finally the Arduino IDE for more development on my home automation system and robots. I’ll probably add GIMP and Inkscape at some point but I haven’t needed them yet.

Now, browsing I could do on a tablet. Email too. I’m pretty adept at typing on a screen and have a nice dinky Logitech (again) keyboard for my iPad Mini. But code? Version control? Spreadsheets? Document design? Presentations? None of these are things I would like to tackle on a tablet today. For these things a laptop or desktop is ideal. In fact, they are necessary.

Consumption not Creation

Tablets and smartphones today are tools of communication and consumption, not creation. There are two reasons for this. Firstly, the interfaces. Ancient though it may be, the keyboard and mouse combination remains our best interface to most of the tools of digital creation.

The exceptions are audiovisual: tablets can competently capture audio, video and images, and using a stylus designers and artists can draw on them. But for most other tasks the touchscreen interface lacks fidelity: even if you can capture your words, manipulating the documents you’ve written is a massive PITA.

This is not just the fault of the screen and fat fingers: the user interface trades off capability for ease of use. This is the second reason that touchscreen devices are limited. The operating systems and apps that sit on them are designed to be incredibly intuitive and usable with a few touches and swipes. This is great, but it means they are usually simplified to some extent, leaving you without the power and control that you might be used to on a desktop or laptop. The power and control you need to be a true digital creator.

The Real Digital Divide

The digital divide is generally accepted to mean the gap between the connected and the unconnected. Those with and without access to the internet. Today in Britain 83% of households have some form of internet access, with the majority of those that don’t reporting that it is lack of need/desire that stops them, rather than a lack of finance or skills. A large proportion of those are over 75. In short, over the coming years the digital divide by this measure is likely to narrow significantly, leaving a hard core for whom skills, disability and cost are the issues. These are issues that can and should be tackled, since it is increasingly hard to navigate modern life without internet access. Not having access can put you at a significant disadvantage from a consumer perspective, as much as anything else: things bought online are often cheaper.

With my futurist’s hat on though, I am more concerned about a different digital divide. That between digital consumers and digital creators.

Putting a connected tablet or smartphone into someone's hands and equipping them with some basic skills may enable them to participate in digital life. They can use eGovernment services, shop and 'join the conversation' on social media. But they can't make an awful lot – at least not anything of business value. As I pointed out above, these devices are great for audiovisual media, but YouTube and Instagram are awash with wannabe Spielbergs and Baileys; only so many people can succeed in that field as Jamal Edwards has.

The digital divide we should be measuring is that between those with access to the skills and the technology to create new products and services, and those with the capability to consume them.

The Three Cs

This is a much harder measure. But I think it is possible. In discussions around the future of work and skills, I have come down to a simple ‘Three Cs’ of skills that are vital for participation in tomorrow’s increasingly digital economy. And no, one of them is not ‘Coding’ (at least not exclusively).

Curation is the ability to find, qualify and absorb information. It's about search skills and fact checking, knowing the difference between something being written and something being objectively true. It's about being able to put that new knowledge into context. The ability to do these things fast, effectively and reliably is vital.

Creation is about synthesis and ideation. The ability to take information that you have discovered and use it to create something new. That might be code, it might be language, it might be design, it might be a new 3D-printed or micro-controlled product.

Communication is about your interface to the rest of the world. People remain at the heart of an increasingly digital society and economy. You need the personal and technical skills to be able to make your arguments and ideas compelling.

All of these skills can be taught and tested.

Breaking Barriers

If we are to break down the true digital divide, the one that threatens to bar many from economic participation in the growing digital society, we need to focus on issues greater than simple connectivity. We need to recognise that the increasingly dominant touchscreen devices, ever cheaper and easier to use, will not in themselves help us to bridge the gap. In fact they threaten to widen it. As touchscreens give way to voice and gesture interfaces, and we are further abstracted from the underlying technology, the threat only increases.

At home, at work and in education we need to understand the true nature of the digital divide and change our behaviour accordingly. The Three Cs can be taught and have to be, not just at school but beyond and throughout life.

We need to ensure that everyone has access to the tools of creativity, as well as consumption.
