Book of the Future: Are you ready for tomorrow?

Frugal Innovation and the Maker Movement
Tue, 29 Jul 2014 11:30:09 +0000

Charles Leadbeater has a new book out. Leadbeater, a writer, political adviser and all-round big thinker, has turned his attention to the forces of innovation outside Silicon Valley: the inventors, activists and entrepreneurs who operate in a world of constraints, as opposed to bountiful capital and light-touch regulation. It is what he calls ‘frugal innovation’. I haven’t had a chance to read the whole book yet, but as usual the RSA podcast is a good start.

The idea of frugal innovation seems particularly relevant having spent the weekend exhibiting at Maker Faire. As regular readers will know, I like making stuff and most recently have been building a home automation system using open source hardware. This is partly for fun, partly because I’ve always wanted a smart home (Tony Stark envy), and partly as an experiment to show what’s really difficult about these things (the user experience design, in case you’re curious).

When I found out Maker Faire was running again this year at the Museum of Science and Industry, I decided to take a stall and show off my work. If you haven’t been to a Maker Faire, think of it as show & tell for grown-ups. There are some stalls there where people sell things, but mostly it’s about sharing what you’ve made (and learned) with other people, both fellow makers and members of the public – lots of them young, which was great. The kids loved playing with Jock the RoboRaspbian, my Raspberry Pi-powered, web-controlled toy robot. And the adults generally liked the idea of keeping an eye on their homes remotely, and being able to cut their energy bills by turning off all the lights their kids leave on.

Maker Faire Manchester


The question I was asked most often was whether I planned to commercialise the system. The answer is always ‘no’. For a start, I’ve sworn off more start-ups for now – at least ones that aren’t connected to my core business. I don’t think there’s a lot of money to be made from the system I’ve built: Samsung, Apple and the like have the manufacturing capability and supply chain to do things more efficiently and at greater scale than I ever could. But most importantly, what I have built depends hugely on the work of others – something that was true for all the makers I spoke to at the Faire.

The software that sits in each of my home automation nodes is heavily based on ‘RESTduino’, a project with multiple contributors who have given their work to the community at no charge. The web platform uses libraries for various functions like talking to the nodes, graphing the data, and communicating with the energy monitoring system – all written by others and given to the community at no cost (‘open source’). Even the hardware I’m using – the Arduino – is open: anyone can replicate its design without licence fees.
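To illustrate how thin the glue between the web platform and a node can be, here is a sketch of how a platform might address a RESTduino-style node. The node address and the exact /&lt;pin&gt;/&lt;value&gt; and /a&lt;n&gt; URL shapes are my assumptions for illustration; check the RESTduino project for its actual routes.

```python
# Sketch of how a web platform might address a RESTduino-style node.
# The node address and URL shapes are assumptions for illustration.
from urllib.parse import urljoin

NODE = "http://192.168.1.50/"    # hypothetical node on the local network

def digital_write_url(pin: int, value: str) -> str:
    """Build the URL that would set a digital pin, e.g. /9/HIGH."""
    return urljoin(NODE, f"{pin}/{value}")

def analog_read_url(pin: int) -> str:
    """Build the URL that would read an analog pin, e.g. /a0."""
    return urljoin(NODE, f"a{pin}")

# In a real system these URLs would be fetched with urllib.request.urlopen();
# printed here just to show the requests the platform would make.
print(digital_write_url(9, "HIGH"))  # turn a relay on
print(analog_read_url(0))            # read a light sensor
```

The point is that the hard-won part, the firmware on the node, is already shared; the platform only has to build a URL.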

All of this means it would be a complex affair to try and scale what I have built into a profitable business. But without that shared work I wouldn’t have been able to build anything at all – or if I could, doing so would have taken ten times as long and cost ten times as much.

In Leadbeater’s last book, We Think, he looked at mass creativity as exposed on YouTube and other social channels. The Maker Faire showcases a more physical level of mass creativity, enabled by the open sharing of different hardware and software components. Every Maker takes those components and builds something uniquely their own, to fulfil their particular needs (or wants). Generally they then share the new components they have built to bridge the gaps back to the community, and the process continues.

As access to these components, and the ability to replicate them, is increasingly commoditised, it will be interesting to see what effect this has on the concentrated innovation of the Apples and Samsungs of this world. Imagine you can search a database of products and systems for a solution to a problem/challenge you are facing. You find a design – of software or hardware – that appeals, and then render it out, either as an installable application or a physical product through a 3D printer and some purchased components.

This exists (to some extent) today in Thingiverse; it’s just not widely used by the average consumer yet. But in just a few years we might all be sharing, or consuming, each other’s frugal innovations.

Intersection: Where Macro Trends Meet Your Market
Mon, 28 Jul 2014 11:30:19 +0000

Futurism is a practice with an increasing level of professionalism and process. Futurologists, trend forecasters and strategists use a variety of methodologies to understand what’s happening, filter the noise, and try to inform and qualify their predictions.

At Book of the Future we have created our own approach that we call Intersection, to allow us to make practical predictions and inform the advice we give to clients. Specifically it is designed to help us understand and demonstrate how macro trends, related to or driven by technology, will impact on the specific sectors our clients are working within.

3D Lens

The process starts with our ‘3D Lens’: we believe that in a specific place (the UK) and over a specific period (the next twenty years), technology will be the biggest driver of change. This is predicated on the simple fact that within the time and space boundaries specified, we are expecting only steady, linear change in the other classic PESTLE factors – Political, Economic, Social, Legal, Environmental. By contrast technology is advancing at an exponential rate, as described by Moore’s Law, and this advance is touching every area of life and work.

Of course there is a small chance that we will have a revolution or a massive natural disaster, or other shock event in the UK in the next twenty years. One that may have an enormous impact. But our role as futurists is not to try and second guess the un-guessable – the ‘black swan’ events. It is to help the organisations that we work with to adapt to the visible future, the one that we believe will define the macro picture and that is already defining it today. As William Gibson said, “The future is already here, it’s just not evenly distributed.” We operate in the small pockets of the future that are here today and we help to expand them to encompass our clients.

Primary Trends

Within the scope of our 3D ‘Lens’ we break the primary technology-driven trends down to five core areas:


Put simply, things happen faster. Rich data moves at the speed of light around the world. Financial transactions take place so fast that they can no longer be handled by humans.

This changes businesses: there’s no value in six month old data when someone else can supply it real-time.

And it changes expectations: consumers and business users alike are rapidly frustrated by anything but an instantaneous response.


If there is a technological solution to a problem and it doesn’t cause grievous social harm, then someone will probably implement it.

If there are legal, environmental, social, financial or technical barriers that need to be overcome, then they likely will be, and sooner than you think.

Technology has been shrinking in size and cost, and growing in power and usability at an exponential rate for decades.

This trend will continue to the point where technology is near-invisibly integrated into the environment around us, and we will not always be aware whether the capabilities we are using are ‘normal’ human ones or augmented by technology.


Business success in the past was often characterised by the ability to optimise processes, supply chains, prices.

This retains value, but as models, channels and demands are changing faster and faster, success is increasingly defined by agility: the ability to enter and conquer new markets and opportunities fast.

This affects the structure of organisations: rather than slick, vertically integrated monoliths they need to be stratified into loosely coupled layers.

Each layer interfaces with the other but might also interface with third parties, offering its thin layer of optimised service as a building block in other people’s value stack.


Technology has lowered the barriers to market entry. The capital costs of a start-up, barring any physical stock, are trending towards zero.

This means more players in any market but also more models, and more channels.

There won’t be a single paradigm in any industry any more: competitors may supply the same products or services in different ways.

Likewise, many and various sub-cultures and micro-markets will exist on a variety of standardised, open platforms.


There is a growing global community that exists outside of national borders. They share an increasingly common culture, albeit coloured by local norms.

Social networks now capture a huge proportion of the global conversation, just as digital media services ensure the wide spread of common cultural reference points.

The last step towards true globalisation will be brought about by the ease with which products can now be moved: as data.

The rules of manufacturing and the supply chain are about to be radically re-written by hyper-local, automated manufacture, placing a huge emphasis on the ready supply of a variety of basic feedstocks.

Market Impact

With each organisation we work with, we look for the market impact of these macro trends, and we focus on areas of stress that already exist in the business. These are often surprisingly easy to find, at least when coming in with a fresh perspective, and appear in all areas of the business: new competition, changes in regulation, rising materials or labour costs, poor customer or supplier engagement, falling budgets or margins, breakdowns in compliance. You can usually find some of these in the industry press before you even start interviewing members of the organisation.

For each stress that we find, we test it against the macro trends and look to see whether it might be mitigated or exacerbated. We try to estimate the scale of the potential impact. Sometimes this is very hard, sometimes it is quite straightforward: if a formerly physical product can now be delivered digitally, the supply chain costs are likely to be orders of magnitude smaller.

Once we have assessed the market impact we can begin to rank the intersections and focus on those that will have the greatest effect. In reality the solutions we design around these intersections are often structural changes that will help to address others as well.

Narrative vs Empirical

You couldn’t call this process scientific. There is no repeatable experiment. Different people following the same methodology might achieve a different result (though we are trying to formalise the process within the organisation at least, so that there is consistency across future engagements). But the evidence from our interactions with clients over the last 18 months is that it is undoubtedly valuable.

Feel free to use the information in this post to try to replicate the process in your organisation. Or, if you’d like some help, you can always drop us a line.

Breaking Band: Building a Better Internet Infrastructure
Wed, 23 Jul 2014 09:43:29 +0000

If you follow me on social media, you will know that my broadband is down. It has been since the storms on Saturday.

I have no problem with my service going down. We’re going to have to get used to crazy weather over the next few years, and this lightning storm was like nothing else I have experienced in the UK. Lightning hit something, or water got somewhere it shouldn’t and things went down.

(Sh)It happens.

The problem is what happens next. Four days of wrangling with poor information, idiotic ‘customer service’ scripts, and under-equipped call centre staff insisting the problem is with my third-party router (only installed because the supplied one was utterly unreliable). Broken promises and repeatedly missed (self-imposed) deadlines. It took a concerted effort on Twitter to make something happen. That something is an engineer who has to come to my house today and work back from there, despite multiple customers in my area being simultaneously taken out (apparently not enough to justify it being called an ‘outage’).

There is no way this is an efficient way to run a business. But based on the chorus of recognition I’ve had across Facebook, Twitter and LinkedIn, my experience isn’t unusual for BT customers.

The Fourth Utility

Me whining doesn’t make for a great blogpost though and that’s not what this is about (though the scripts of the conversations with some of the call centre staff are pretty amusing). My point is that the internet is the fourth utility and it needs to be treated as such.

In the days when email and the web were the only things carried over the internet, it was annoying but not the end of the world if it went down. After all, for the first few years of consumer internet it was usually down: you had to dial up to get access.

But today the internet carries much more than these intermittent services. Across entertainment, environment and security it has become a platform in its own right on which we are increasingly reliant.

Without the internet I can’t access services that I pay for, like Netflix and Spotify. This adds to the ‘lost value’ of it being down.

My Nest thermostat can’t communicate with the world to find out what weather conditions it should be responding to. And I can’t communicate with it to turn it on and off, potentially reducing my comfort and costing me additional money in gas.

I can’t monitor the cameras covering parts of my property, or get alerts from my home automation system about intruders, floods or fires.

These are all what you might call ‘first world problems’ today, and I know I’m in the minority as a user of all these services. But they will be commonplace before long. And I pay good money for my internet platform to be able to use them.

Connected Age

We are moving to an age of ever-increasing connectivity. Some might decry our reliance on machines and their interconnection and I understand their concerns. I always think about the overweight chair-bound slobs in Disney’s Wall-E, beholden to their robot servants. But history suggests that technological advance usually drives life improvements, and human beings find ways to mitigate the risks they present.

If we are to continue the pace of technological advance, and retain our place as one of the more technologically-advanced nations, then we need to change the way we treat broadband provision. We need to stop looking at it as a luxury and understand it as a utility, and frame policy and provision appropriately.

At the moment there is far too little competition at the right levels of the market. Having most providers beholden to BT’s Openreach infrastructure does not drive innovation. The special deals that the government has with the big providers (BT and Virgin) to discount the tax they pay on their cables prejudice the market against new entrants. Regulation discourages the opening up of access to existing assets to allow the sharing of ducts, poles and other routes by providers, utilities and transport companies.

If my connection goes down in a few years’ time, I would like my provider to know before I do and tell me. I’d like them to start the diagnosis and repair automatically, before I have called and without any human intervention. If I’m unhappy with their service, I’d like to know that there are genuine, physical connection alternatives for my service, not just a re-branding of the same pair of wires.

Ofcom has highlighted that our broadband provision is some of the best in Europe. I’d agree with the FSB: it’s not good enough.

Emerging from the Colossal Cave
Thu, 17 Jul 2014 21:21:32 +0000

This post is based on the script from two presentations I gave this week, at Creative Kitchen in Liverpool and Tameside Together in Manchester. You can see the presentation, built using Impress.js, here.


You’re familiar with Moore’s Law, right? Coined by Intel co-founder Gordon Moore back in 1965, it suggests (based on the evidence Moore had witnessed back then) that the number of transistors that can economically be put on a silicon chip doubles every two years. In other words, your computing bang for your buck has been growing at an exponential rate for nearly fifty years.
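To get a feel for what nearly fifty years of that compounding means, here is a quick sum (my own illustration of the doubling, not a figure from Moore):

```python
# Doubling every two years for fifty years:
doublings = 50 // 2      # 25 doublings
factor = 2 ** doublings
print(f"{factor:,}")     # 33,554,432: a roughly 30-million-fold improvement
```

The exact multiple depends on when you start counting, but the scale of the change is the point.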

This law, and a number of parallel laws about the speed of our digital connections, and the amount of stuff we can stick on a hard disk, have described the technology revolution. Devices getting progressively smaller, cheaper, faster, better.

But I think they miss a vital component of what has changed about technology: it has become more human.

I’ve often described early computing experiences as like travelling to an alien planet. The machine you interacted with was a giant monolith, housed in its own environment, speaking its own language and using its own customs.

Now I have a new analogy.

One of the first computer games I ever played was Adventure, or Colossal Cave, as it was otherwise known. It was on a BBC Micro. I was a bit young initially, but I have vivid memories of my dad and cousin getting quite into it. Colossal Cave was a swords and sorcery epic delivered entirely through the medium of a text interface. ‘Choose Your Own Adventure’ or ‘Role & Play’ novels (perhaps also only familiar to people of my particular vintage) on the small screen.

Now I can’t remember the exact plot or characters of the Colossal Cave, but I imagine, somewhere in the depths of this cave, you may have found an ogre. This, for me, is the early computer. Giant, hulking, slow-witted and inhuman.

A little closer to the surface, and a little more evolved, you may find an ork. These are your pre-GUI personal computers. Communication is easier, but they’re still dim and gruff.

Then come your goblins: smaller, nimbler and more able to interact. Laptops with graphical interfaces, and even access to the Internet.

Today the elf-like smartphone is all the rage. Slender, attractive, and much closer to human in its ability to interact through touch, motion and voice.

But the elf remains at the edge of the cave. It can look out into the light and shout to us, but it can’t influence our physical world. Without some form of prosthetic there are hard limitations on its reach and strength.

The history of computing over the last half-century for me is one of evolution. Of computers evolving towards a state where their interactions with us are not limited to the screen, and instead they can communicate with us on all the levels that we communicate with each other, and change our environment around us.

As designers and coders we have for years been shining a torch into the Colossal Cave, briefly illuminating the intelligence inside so that we can interact. Now is the time for the computers to emerge from the cave and begin to communicate with us on our terms. But they need our help to do so.

Stepping back from my well-stretched analogy for a minute, there is good reason for us to help.

Do we really want to interact with data via a screen? Even the loveliest high-resolution touch display is an artificial environment relative to the majesty of the world around us. It’s also incredibly low-bandwidth. Think about the breadth of senses you have, through which your brain manages to process information, microsecond by microsecond. Why limit ourselves to interacting over a few million pixels when such rich experiences are available to us?

Computers now have so much data at their disposal, and the intelligence to process it, that we can let them be autonomous. The screen and keyboard were created when we had to manually provide computers with all of their inputs, all of their instructions. That is no longer the case. Computers can make decisions based on time, date, weather, environment, location, your social graph and any number of other data points. Why bind ourselves to manual control when they are capable of taking on tasks we no longer need to do?

There is a challenge here though. More than one in fact.

The first one is the age-old sci-fi question: should we? Should we give them this much power? Should we leave behind manual labour? What does it mean for jobs?

The answers to these questions are book-length in themselves, but I’m inclined to think we should accept and even encourage this next step in technological progress. For the simple reason that there are more challenges for human minds to tackle. Why not hand the problems we have already nailed over to machines, if they can solve them more efficiently?

The second challenge is around ‘how’. Because for all my bravado and optimism above, this stuff ain’t easy. Or more specifically, the user experience design challenge isn’t easy.

Here’s an example. I’ve been building my own home automation system, as I have documented on this blog. This is both fun (if you’re a geek like me) and a serious experiment: I’m using the smart home as a small scale model for the smart city. The basics are simple: a few hours, a few quid and some cobbled-together code gets you a system that measures all sorts of environmental variables and allows you to trigger electrical devices in response. But as soon as you start trying to design the user experience, it starts to get really complicated.

Take a simple lamp. I want lamps to come on when it’s dark and there’s someone in the room. And, more importantly, to turn off when the room is empty, saving me money and cutting my carbon footprint. You’d think the rules for that would be pretty simple, and they are – until human behaviour gets involved.

Because sometimes we like it being dark. When we’re trying to sleep, or get a little cosy on the sofa to watch a film. When the house keeps turning the lights on in those situations, it gets pretty annoying. So what do you do? Create modes? Change behaviour throughout the day? Have a manual override? All of these things are possible but what you realise is that the number of permutations is enormous: automating response to human preference is really hard.

This is why we need more people from the creative and digital industries to start experimenting with physical computing. Sure there are a few forward-thinking agencies playing with wearables and microcontrollers. But think about how many websites are produced each year. Imagine how fast we could change our environment and our economy if we produced even a fraction as many digital, physical devices.

There is a particular opportunity here in cities with a manufacturing heritage (and often a surprisingly strong living industry), and a more recent digital scene. Manchester and Liverpool are the two places where I’ve been spreading this message this week.

It’s a simple message and not a particularly original one, but I hope I have carried it to some new audiences. Computing is emerging from the darkness of the cave. Now is the time to greet it and introduce it to our world.

Smart Cities or Home Automation: It’s All About UX
Mon, 16 Jun 2014 05:27:03 +0000

When you walk into my utility room, the lights come on. If it’s dark. The latest evolution in my long-term project to make my dumb (but very pretty) old house smart is a rules engine. Every time I step in and my way is lighted, my heart leaps a little. It’s a simple thing but it entertains. I’m still getting used to not having to turn the lights off though.

In the utility room and my office, both rooms down in the cellar, the rules are pretty simple: it’s always dark, you always need the lights on. But as I’ve started thinking about rolling these rules out across the rest of the house, I’ve realised things aren’t going to be so simple elsewhere.

Take the living room. Imagine you have a similar rule there: if it’s dark, and someone enters the room, turn on the lights. Great. But then you want to turn the lights down, get cosy and… watch a film. What then? Every time you turn the lights off, the system turns them back on again.

So you start to get into conditionals: if someone turns the light off, leave it off. Then you come in the next night and stub your toe because the lights don’t come on.

Maybe you put it on a timer to reset. Maybe you have a series of programmable ‘mood’ macros that set the rules depending on what you’re doing at the time. The point is not that there aren’t solutions. It’s that they are inevitably complex and need thinking about. They need designing.
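A minimal sketch of the lamp logic makes the problem concrete. The function, the mode names and the override flag below are my own illustration of the idea, not the actual code from my system:

```python
# Illustrative lamp rule with a 'film' mode and a manual override,
# showing how quickly the simple rule accumulates special cases.

def lamp_should_be_on(dark: bool, occupied: bool, mode: str, manually_off: bool) -> bool:
    if mode == "film":        # the user wants it dark, presence or not
        return False
    if manually_off:          # respect the manual switch-off...
        return False          # ...and stub your toe tomorrow night
    return dark and occupied  # the simple rule we started with

print(lamp_should_be_on(dark=True, occupied=True, mode="normal", manually_off=False))  # True
print(lamp_should_be_on(dark=True, occupied=True, mode="film", manually_off=False))    # False
```

Even this toy version has two boolean inputs, a mode and an override to reason about per lamp; multiply by rooms, timers and time of day and the permutations explode. That is the design work.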

Lessons from Santander

Demonstrating points like this in practical terms is part of the reason I started Project Santander, my home automation project, named after the home of the European smart city project run by Telefonica. The home is a good proxy for the city, where these design issues are only magnified.

Instead of having to satisfy my wife and children’s understandable demands for a house that doesn’t frustrate more than automate, imagine having to satisfy a whole city of people. This is the challenge facing the mayor of Santander and the team working with him from Telefonica and the University of Cantabria.

As I’ve highlighted before, the hardware challenges of Project Santander were (relatively) straightforward. I now have nodes around my house collecting temperature, humidity, light level and presence, and allowing me to control things like the lights above. These nodes are not dissimilar to what’s being used in Santander. Yet they cost me just £10 each and were built entirely from off-the-shelf, open source hardware and software. In Santander they rolled out 20,000 sensors for just €1m – their pricing is not an order of magnitude different to mine.

These 20,000 sensors generate relatively little data to transfer and store: when there were 12,000 sensors in the city, they were storing just 5MB of data per day. This makes sense: a temperature reading can be stored as a single byte of data – potentially less. Do the maths:

  • 1 byte per reading
  • 60/5 = 12 readings per hour = 12 bytes per hour
  • 12×24 = 288 readings per day = 288 bytes per day per sensor
  • 12,000 × 288 = 3,456,000 bytes per day = 3,375 kbytes per day = 3.29 MB per day
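The sums above can be checked in a few lines:

```python
# Reproducing the back-of-the-envelope sums for the Santander sensors.
bytes_per_reading = 1
readings_per_hour = 60 // 5                    # one reading every five minutes
readings_per_day = readings_per_hour * 24      # 288 per sensor per day
sensors = 12_000

bytes_per_day = sensors * readings_per_day * bytes_per_reading
print(bytes_per_day)                           # 3456000 bytes
print(round(bytes_per_day / 1024, 1))          # 3375.0 kbytes
print(round(bytes_per_day / 1024 / 1024, 2))   # 3.3 MB per day
```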

Bear in mind the temperature probably doesn’t change every five minutes. And even if it does, you can do some smart things to store many fewer readings than this, or store them more efficiently. Likewise with humidity, noise level, light, parking space usage. This is what accounts for the fact that many nodes have more than one sensor. This stuff is not particularly ‘big’ data.

So it’s cheap and easy to collect, transmit and store. What’s the challenge?

Making it useful. Making it intuitive. Making it human.

This is a user experience (UX) challenge. A design challenge. And it’s an opportunity that I don’t think much of the tech community – particularly those with the biggest skills for, and focus on, user interface design – has yet grasped.

Stop Redesigning the Web. Start Redesigning the World.
Thu, 22 May 2014 07:44:29 +0000

Friction ignites innovation.

The examples are all around us. Take banking. I tried to send money abroad this week. Urgh. A thoroughly 20th century experience, replete with long acronyms, complex codes and lots of cost – 25% of the amount I was trying to send.

It’s no surprise that finance, banking and payments are hot spaces for innovation, from crowd funding and peer funding, to merchant systems, to whole new currencies. Organisations like PayPal, iZettle and bitcoin are tackling a staid old system that has failed to move truly into the internet age.

That age is characterised not just by technology but by culture: openness and sharing, of hardware and software, interfaces and protocols. A culture of speed and action.

Increasingly that culture is moving out of the digital world and into the physical. The new hardware categories – wearables, smart home, 3D printing – are much more open than their predecessors. If the hardware itself is not an open design, based on off-the-shelf components, then the software usually offers an API. Having had my Nest smart thermostat installed, I can’t wait to start playing with its API, integrating it into my own home automation system.

That system, Project Santander, has itself been built on these internet principles: rapidly prototyped using off-the-shelf hardware and shared software components. The most challenging part of its construction? The core software.

This software has been created using the simplest of web technologies: PHP, MySQL, HTML, CSS, JavaScript. Because I don’t know anything else – in fact I barely know this. As I will happily concede to anyone, I am no coder.

Imagine what you can do with greater skills. Imagine the problems you can tackle. For me what is exciting about the ‘Internet of Things’ is the application of all those internet principles and skills to physical world problems. Skills of design and code that used to be confined to tackling problems in the virtual realm can now be applied to the physical. Energy, safety, health, fitness, food, education, and much more; the possibilities are endless.

This theme has a particular relevance and resonance in Greater Manchester, a place where great leaps forward in the science and technology of the physical world and the virtual have been made. Officially there are 45,000 people in the digital and creative sectors in Greater Manchester.

Imagine what we can do with the world if our digital skills are increasingly applied to physical problems.

The New Digital Divide: Makers and Consumers
Wed, 14 May 2014 10:10:08 +0000

I have a new laptop, at least for the length of this trial. The team at Dell have loaned me an XPS and I have to say it’s flippin’ awesome. OK, I don’t have to say that – it wouldn’t be much of a trial if I did – but it’s true.

In between trials of new machines I operate on a five-year-old desktop or a six-year-old laptop. Both are perfectly functional but limited. The laptop performs admirably for its age, thanks to a lightweight Linux OS, but unfortunately its frame is anything but lightweight: more luggable than portable. The desktop is very comfortable to use with its big screen and a posh mouse and keyboard (thanks to a never-ending trial from Logitech). But a lack of RAM means it becomes a little ponderous when running lots of Chrome windows or anything else taxing.

By contrast this new laptop has everything: slender metal frame, Core i7 processor, buckets of RAM, and a battery that lasts so long I’ve stopped bothering to carry the charger. Even if I use the laptop to charge my phone and other devices it seems to get me through days of work.

I can’t definitively say this is the best machine out there for the money – it’s not that sort of test. But its sheer capability has reminded me of something: the dramatic difference that remains between a ‘real’ computer and a tablet or smartphone. For me this is an increasingly important frontier in the digital divide.

Makers and Consumers

Because I’m using this machine as my main device for the period of the trial, I’ve had to install my regular software stack on it. I could automate this process and probably will in future, but it’s actually quite interesting to install things as and when the need arises. It makes you very aware of the software on which you’re most reliant. In the past this approach has also made me very aware of the (un)availability of an internet connection when you need one. But on my second trip over the Pennines in as many weeks, I find myself happily downloading hundreds of megabytes of software over Three’s 3G and 4G networks. There’s a reason I have an unlimited contract…

The software I have installed started with a browser or two: Chrome (for browsing, mail and apps) and Firefox (for web design and testing). Then a text editor (Bluefish) and version control (Git). Then an office suite or two – LibreOffice and MS Office. And finally the Arduino IDE for more development on my home automation system and robots. I’ll probably add GIMP and Inkscape at some point but I haven’t needed them yet.

Now, browsing I could do on a tablet. Email too. I’m pretty adept at typing on a screen and have a nice dinky Logitech (again) keyboard for my iPad Mini. But code? Version control? Spreadsheets? Document design? Presentations? None of these are things I would like to tackle on a tablet today. For these things a laptop or desktop is ideal. In fact, they are necessary.

Consumption not Creation

Tablets and smartphones today are tools of communication and consumption, not creation. There are two reasons for this. Firstly, the interfaces. Ancient though it may be, the keyboard and mouse combination remains our best interface to most of the tools of digital creation.

The exceptions are audiovisual: tablets can competently capture audio, video and images, and using a stylus designers and artists can draw on them. But for most other tasks the touchscreen interface lacks fidelity: even if you can capture your words, manipulating the documents you’ve written is a massive PITA.

This is not just the fault of the screen and fat fingers: the user interface trades off capability for ease of use. This is the second reason that touchscreen devices are limited. The operating systems and apps that sit on them are designed to be incredibly intuitive and usable with a few touches and swipes. This is great, but it means they are usually simplified to some extent, leaving you without the power and control that you might be used to on a desktop or laptop. The power and control you need to be a true digital creator.

The Real Digital Divide

The digital divide is generally accepted to mean the gap between the connected and the unconnected. Those with and without access to the internet. Today in Britain 83% of households have some form of internet access, with the majority of those that don’t reporting that it is lack of need/desire that stops them, rather than a lack of finance or skills. A large proportion of those are over 75. In short, over the coming years the digital divide by this measure is likely to narrow significantly, leaving a hard core for whom skills, disability and cost are the issues. These are issues that can and should be tackled, since it is increasingly hard to navigate modern life without internet access. Not having access can put you at a significant disadvantage from a consumer perspective, as much as anything else: things bought online are often cheaper.

With my futurist’s hat on though, I am more concerned about a different digital divide. That between digital consumers and digital creators.

Putting a connected tablet or smartphone into someone’s hands and equipping them with some basic skills may enable them to participate in digital life. They can use eGovernment services, shop and ‘join the conversation’ on social media. But they can’t make an awful lot – at least not anything of business value. As I pointed out above, these devices are great for audiovisual media but YouTube and Instagram are awash with wannabe Spielbergs and Baileys. Only so many people can succeed in this field as Jamal Edwards has.

The digital divide we should be measuring is that between those with access to the skills and the technology to create new products and services, and those with the capability to consume them.

The Three Cs

This is a much harder measure. But I think it is possible. In discussions around the future of work and skills, I have come down to a simple ‘Three Cs’ of skills that are vital for participation in tomorrow’s increasingly digital economy. And no, one of them is not ‘Coding’ (at least not exclusively).

Curation is the ability to find, qualify and absorb information. It’s about search skills and fact checking, knowing the difference between something being written and something being objectively true. It’s about being able to put that new knowledge into context. The ability to do these things fast, effectively and reliably is vital.

Creation is about synthesis and ideation. The ability to take information that you have discovered and use it to create something new. That might be code, it might be language, it might be design, it might be a new 3D-printed or micro-controlled product.

Communication is about your interface to the rest of the world. People remain at the heart of an increasingly digital society and economy. You need the personal and technical skills to be able to make your arguments and ideas compelling.

All of these skills can be taught and tested.

Breaking Barriers

If we are to break down the true digital divide, the one that threatens to bar many from economic participation in the growing digital society, we need to focus on issues greater than simple connectivity. We need to recognise that the increasingly dominant touchscreen devices, which are becoming ever cheaper and easier to use, will not in themselves help us to bridge the gap. In fact they threaten to widen it. As touchscreens give way to voice and gesture interfaces, and we are further abstracted from the underlying technology, the threat only increases.

At home, at work and in education we need to understand the true nature of the digital divide and change our behaviour accordingly. The Three Cs can be taught and have to be, not just at school but beyond and throughout life.

We need to ensure that everyone has access to the tools of creativity, as well as consumption.

Never Trust Someone Who Offers the Same Answer to Every Question Mon, 12 May 2014 09:29:55 +0000 One of the challenges of being a futurist is communicating the difference between what you believe to be true, and what you would like to be true. I try to maintain a good distance between the two.

Last week I was speaking at the Wired ’14 customer event for Daisy Group PLC, about the impact of technology on our personal and working lives. I highlighted that the link between economic growth and employment has been broken since 1999, referencing The Second Machine Age. After centuries in which mechanisation and automation have been part of a cycle of creative destruction that ultimately generated greater overall wealth, digital technology seems to be concentrating wealth in the hands of a few and destroying more jobs than it creates.

I could be accused of being a bit of a cheerleader for technology: I believe that it has been a huge factor in our increasing health and wealth over the recent past. But I’m not suggesting that it is an unalloyed good: technology presents issues we have to tackle.

I was collared after the event by someone who took issue with my analysis. He believed markets would solve the jobs problem – in fact he believed markets could solve every problem. I suggested the state would absolutely have a role to play and intervention would be required at some point if we were not to have a deeply unequal society (even more so than today). My challenger scoffed and painted me as an old-school socialist in favour of some form of centrally-planned economy.

Arguing against what you would like someone to be saying is often easier than arguing against what they are saying.

I was raised in a fairly lefty household and retain a lot of the values that I absorbed there, particularly around the role of the state. I’ve since spent nearly fourteen years entirely in private business – nine of those self-employed and growing my own businesses. I recognise where the market can do good things. What I don’t believe is that the state can solve everything or that markets can solve everything. To me either of those views is a kind of dangerous fundamentalism, little different to the worst excesses of religion.

Believing that there is one answer to many problems is generally a huge error.

Being Charitable

At the end of the week, I headed off to Edinburgh to run a workshop on the future of charities for the SCVO. Here I highlighted the growing role of technology in work and home life, and how it is transforming each. As often happens, this was taken by some as an argument for the increasing role of technology in both of those spheres. “But human contact is important. We can’t do what we do unless we’re face to face,” is a refrain I hear often in these sessions.

This may be true. But that doesn’t change the fact that there are other organisations competing in the digital sphere for the same funds and the same few seconds of attention that are the lifeblood of many charities. The evidence would suggest that we have a limited reservoir of mental and financial capital to donate. Though they may not be able to deliver their most important services digitally, charities have to compete digitally in the worlds of campaigning and fundraising if they are to maintain their support, and therefore the ability to conduct their work face to face.

More than that, the current financial climate would suggest that charities need to find ways of introducing digital means into their most fundamental operations, because they may well be trying to support as many or more people with reduced resources. Anyone can fund-raise now. JustGiving and other platforms give individuals campaign tools that would only have been available to established charities in the past. Campaigns like the #NoMakeUpSelfie or Stephen Sutton’s appeal show the incredible speed with which new media can attract funds – funds that will then not be given to other causes that might historically have received them. More than one charity has told me that their individual donations are going “off a cliff”.

One Size Does Not Fit All

In each of these sessions and others, for businesses, local government and charities, I have talked about Stratification: my model for how successful modern businesses increasingly seem to be organised, and a model I believe can be applied to lots of different organisations. What I don’t propose is that there is a single template that can be simply copied and pasted between different charities, businesses and councils. There might be a common idea that can inform the approach in each case, but each case must be handled differently. Just like the answer is not always market or state, the answer is not always “What would Amazon do?”

This is the reason I’m currently exploring ways to expand my consultancy operation – without building a traditional consultancy business. There’s lots of demand for people who can apply original thinking to challenging problems, particularly people who are literate in the current and next generation of technology-driven change – i.e. applied futurists. While my focus is on research and writing, I really like the ‘applied’ part of applied futurism: tackling challenges and driving innovation.

There’s also a broader point here: consultancy is a time-based business. One that it is hard to scale, completely in contrast to the software product businesses that are driving the current wave of automation – and arguably widening the gap between rich and poor. In consultancy you largely get paid for the work that you do just once. If you create a successful commercial software product, be it Facebook or Microsoft Office, you get paid many times over. In industry parlance, product-based businesses scale better than time-based businesses.

As products are increasingly commoditised and standardised, and even made open source, I can see the (im)balance between product and service beginning to change. There will always be new products to be created that can be sold at a high margin – I’m not naïve enough to believe that we have approached the point at which everything has been invented (and I’d be very disappointed if I thought that were true). The market is probably the right way to price, and incentivise the creation of, these new products.

But perhaps there will be an increasing base of products, devices, software and services that will be either free or cheap, because they are open or in highly competitive markets. Products and services where the only real money to be made is in the customisation, consultancy or personalisation. This is arguably already the case for the web.

Technicolour Riot

Unless the last fifteen years of data is a blip – and that’s possible – I don’t think the market is likely to solve the onrushing jobs challenge. I don’t think governments can either. Nor can the open source movement, standards bodies, charities or anyone else.


The joy of this planet is that the answers are rarely black and white. They’re not even shades of grey. They are a glorious technicolour riot of diversity. Any ideology that suggests one answer to the multitude of challenges and opportunities facing us is guaranteed to be wrong, whether it proposes human beings or technology, markets or states.

If someone tells you they have one answer for everything? Don’t trust them.

Stratification: Has the Integration vs Specialisation Question Been Answered for Tomorrow’s Business? Fri, 04 Apr 2014 08:24:08 +0000 In the future, if you want a big and profitable business, are you better served focusing on your core value or owning every step in the supply chain, operating every component? It’s an age-old question, the answer to which has been subject to wide debate and big swings in fashion, probably since the earliest enterprises began.

Each time the issue is raised it comes with new buzzwords, different components having their own terminology for moving in or out of the organisation. Call centres (‘outsourced’), development (‘offshored’), telephony (‘hosted’), software (‘SaaS’), hardware (‘cloud’), etc. Logistics, marketing, finance, property, retail, manufacturing, design, and even the original product and service concepts themselves can all be offloaded.

In the ’90s and early noughties it was all about moving functions out of the business, like call centres and software development. And then came the turning of the cycle, the inevitable backlash. Companies now make a big point of having ‘UK call centres’. I’m hearing grumblings about outsourced development that may see similar slogans slapped on locally-built software.

Purity of Principle

The reality is that few businesses can exist in either ‘pure’ state: entirely lean and focused, or monolithic and integrated.

Take Apple, for example, a famously vertically-integrated business and one that bucked the trend through the 90s and noughties. Apple owns and controls a large number of the steps in the content supply chain. The design and manufacture of the devices, the software that runs on them, the shops that sell them and the online shops that sell the content for them. But it still outsources large parts of its manufacturing. It has to source components from a wide number of suppliers, some of them competitors elsewhere in the market. And it opens up its own software and content marketplace to a large number of suppliers.

Tiers of Business

It is this tier of the business that I find most interesting. Because Apple both competes in this marketplace with its own applications, and allows others to enter – even if the applications compete directly.

Amazon does the same, operating a vertically integrated business but allowing others – even competitors – access to certain tiers. Take the layer diagram I use to break down so many things into manageable chunks.

Stratification Layer Diagram

Action Layer: This is the consumer.
Presentation Layer: This is Amazon.com, the online shop with which so many consumers are now familiar.
Processing Layer: This is Amazon Web Services. What few consumers know is that Amazon allows other companies, including other online retailers, to use its incredibly powerful and cost-effective computing platform to host their shops, applications, databases, and files.
Connection Layer: Amazon’s logistics operation is enormous and expanding. It doesn’t serve other retailers today, but it could.
Collection Layer: Likewise Amazon’s procurement operation is not yet open to others. But could it be?

This situation is replicated across technology businesses in hardware and software alike. Companies are increasingly tiering their organisations and recognising that those tiers have to be treated as standalone markets. There is no problem dealing with suppliers or customers in one tier who might be competitors in another. Witness Samsung making screens for itself and Apple. Intel fabricating chips featuring arch rival ARM’s core processor designs. Microsoft finally releasing Office for iPad.

Low Friction Interactions

What’s most interesting about this ‘co-petitive’ trend is not that it is happening in these different layers. This has been the case for some time – for example, Microsoft has long made software for, and been an investor in, Apple. What I find interesting is how the layers interact.

To give a practical example, I’m building a dashboard for my business. This will show me the key performance indicators for my weird hybrid of a business, at a glance. To do this I’m using a very nice framework called Dashing, built by techies at the ecommerce platform Shopify (a company which itself could be a nice case study in stratification). Because Dashing is built in a particular language (Ruby) that my usual web hosts don’t support, I had to try an alternative. Having heard good things I checked out OpenShift, and got my application set up and running before I’d really read around what it is.

Put simply, OpenShift acts as a smart management/integration/user interface layer over Amazon Web Services – Amazon’s web and application hosting platform. When I add my application to OpenShift what it actually does is set up hosting for me on Amazon’s platform. It can do this, seamlessly and transparently, because the interface between the two systems is entirely automated and programmatic. Even though a new order potentially means the reconfiguration of physical hardware, the conversation happens entirely in data.

This interface between the two companies and the two systems is incredibly low friction. Commands flow in one direction, costs and services flow back in the other. It means Amazon can afford to sell its services to lots of people in lots of ways, because the cost of reaching them and supporting them is incredibly low, further enhancing (or at least maintaining) the low prices. Because interactions happen in real time, and are data driven, monitoring and resolving any breakdowns in the service is quicker and if not easier, then at least based on good evidence – not always the case in human to human interactions.
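The shape of that data-driven conversation can be sketched in a few lines. This is purely illustrative – the function, field names and tariff below are invented, not OpenShift’s or Amazon’s actual API – but it shows the pattern: a command flows one way as structured data, and cost and service details flow back the same way, with no human in the loop.

```python
import json

# Hypothetical sketch of a low-friction, programmatic interface between
# two platform layers. All names, fields and rates are invented for
# illustration -- this is not either company's real API.

def provision(request_json):
    """The 'lower' layer: receive an order as data, return the
    resulting service and its cost as data."""
    request = json.loads(request_json)
    rates = {"small": 0.02, "medium": 0.08}   # pretend hourly tariff
    return json.dumps({
        "app": request["app_name"],
        "status": "running",
        "hourly_cost": rates[request["gear_size"]] * request["gears"],
    })

# The 'upper' layer: a new order is just a structured message.
order = json.dumps({"app_name": "dashboard", "gear_size": "small", "gears": 2})
response = json.loads(provision(order))
```

Because both sides speak in data, every order is also a log entry: monitoring and billing fall out of the interaction for free.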

People as a Service

Stratification typically applies to business units or functions. These can be neatly packaged with an interface on the outside that allows the relevant information and resources to be passed back and forth. But increasingly people are becoming package-able units too.

Through my conversations with Tim Lovejoy on the future of work, I’ve been looking a lot at the increasing drivers towards self-employment. At one end of the economy, self-employment is almost being mandated. Under-employment and zero-hours contracts are driving people to juggle multiple ‘clients’, and supplement their income with other work. At the other end self-employment is ever more appealing: software and web-based services take much of the pain and cost out of running your own small business, and it remains highly attractive from a tax perspective. Businesses are even encouraging senior employees to move to more flexible working, recognising the potential cost savings on both sides and, more importantly, the valuable learning that executives bring back from other businesses.

This is essentially stratification of the workforce. Employees are becoming thin layers, or segments of layers in their own right. Via corporate collaboration software or virtual work exchanges, each has their own low-friction, data-driven interfaces to the other layers.

What is crucial in an arrangement like this is ensuring that the business retains access to the workforce that it needs, when it needs it. And that the workforce maintains a relationship with the business that supports their loyalty and motivation. Achieving this in a flexible role for a senior executive is one thing. Achieving it for a large workforce on zero-hours contracts, quite another.

Advantages of Stratification

I believe that stratification via a low friction, programmatic interface between networked business units right down to the individual scale, is becoming an increasingly visible trend in the structure of business. And for good reason. It improves the interaction between layers in an organisation, and it allows those layers to be opened up to third parties. This approach, and the business philosophy attached to it, has a role to play in the development of many organisations across the public and private sectors.

Some of the key advantages it offers are:

Agility: One of the most common questions I am asked in consulting engagements is how companies and organisations can become more agile. By codifying and streamlining their interactions, stratification allows the different parts of an organisation to be moved around more easily. New interactions with other departments or third parties can quickly be added to support new products, services or enhancements.

New Revenue Streams: In the organisations I deal with there is often untapped value in various repositories of data or well-developed internal processes. Stratification appears not only to increase the visibility of these hidden treasures but also to create the means by which they can be accessed.

Reporting: Stratifying the organisation forces the automation of certain systems that may have held out until now, including reporting. I haven’t encountered many organisations where good quality performance data is reaching the people who need it in a timeframe that many would consider ideal – i.e. real-time.

Disadvantages of Stratification

By its very nature stratification lays bare the inefficiencies in a business and drives automation. As many people are noting now, digital automation may drive productivity but it doesn’t create jobs. Like any business change, stratification will encounter resistance and this resistance will be reinforced by the level of transparency it creates around the activities of departments and individuals.

Principles of Design

Where organisations have succeeded in stratification there seem to be some clear design principles emerging.

Codify Inputs and Outputs: The first step is to break an operation down into clear units that have defined and repeatable inputs and outputs. This won’t be possible with all pieces of the business, but in some cases this may highlight where one unit is being tasked with multiple jobs that ought to be split. For example, it can be hard to operate efficiently when one team is tackling parallel work streams with very different work flows, time scales and deliverables.

Systematise Processes: While most of our work (though not all) may be computerised, much of it is not systematised. The work flow in departments is not a defined, documented process. Instead knowledge is locked in people’s heads. Processes are carried out using general office software packages and email, rather than tools that encapsulate the process in themselves. These tools can guide users through a process and check for errors along the way, increasing quality, reducing completion time and enabling new staff to be trained very quickly. Most importantly systematising the process automates the production of performance data, giving management much clearer insight into each corner of the business.

Open Data: Data is the language of stratification. This is not a diminution of the importance of human relationships in minimising the friction in a business, but a recognition of the fact that the speed and precision of data can overcome some of the limitations in a purely ‘meatspace’ process. Opening up restrictions and overcoming fears about the publishing of data, internally and externally, privately and publicly, is a huge part of the stratification process.

Publish APIs: An API is an application programming interface. It is the means by which commands and data can be sent and received from a piece of software by another piece of software rather than a human user interface. Think of it like the pins and holes on a piece of Lego that allow it to connect to others. Just like Lego, with an API to your software – or your departments – it is much easier to rearrange the building blocks of business into new configurations, supporting new processes or creating new services or products.
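The design principles above can be sketched together in a few lines: a department reduced to codified inputs and outputs, with the performance data falling out as a by-product. The department and its process here are invented for illustration – the point is the shape of the wrapper, not the specifics.

```python
# Sketch of 'an API for a department': a unit of the business with
# defined, repeatable inputs and outputs, so it can be recombined
# like Lego. The invoicing example is invented for illustration.

def invoicing_department(orders):
    """Input: a list of (customer, amount) orders.
    Output: issued invoices, plus performance data for free."""
    invoices = [{"customer": c, "amount": a, "status": "issued"}
                for c, a in orders if a > 0]      # error check: reject bad input
    metrics = {"processed": len(invoices),
               "rejected": len(orders) - len(invoices),
               "total_value": sum(i["amount"] for i in invoices)}
    return invoices, metrics

# Because the interface is codified, any other 'block' -- internal
# department or third party -- can plug straight in:
invoices, metrics = invoicing_department([("Acme", 120.0), ("Bogus", -5.0)])
```

Note that the systematised process checks for errors as it runs and produces its own management data – no separate reporting exercise required.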

Applying Stratification

Stratification is not a wholly original idea, more a synthesis of some key trends in the development of organisations and businesses over the last thirty years. It is in itself a ‘re-combinatorial innovation’. That gives us confidence that even though the idea is at an embryonic stage, at least its component parts are sound.

Where stratification perhaps most extends existing thinking is in the idea of applying the principle of an API to a department or even a person, rather than just a piece of software. This will not be universally popular. But it’s important to repeat: this is not about valuing computers over people, or diminishing the role of people in an organisation. We are not expecting people to behave like software. Rather, as has been pointed out in multiple books and studies (see ‘The Second Machine Age’), there are things that machines do better and there are things that people do better. Recognising this and separating the work accordingly has long been a recipe for increased productivity.

Capturing process in software, and using that captured process to handle inputs and outputs and to deliver good performance metrics, makes a lot of sense. Using this captured process to create a wrapper around departments that can then be understood and manipulated as logical blocks in the organisation makes sense. Allowing third parties to access the now open, standard interfaces that connect these blocks together also, I believe, makes sense. Inside that well-defined framework should be greater, rather than lesser, opportunity for people to do the things that people do best: interact, innovate and improve the business around them.

Humanoid Robots: The User Interface for the Internet of Things? Tue, 11 Mar 2014 11:20:54 +0000 Two of my projects are merging. This morning, inspired by a second viewing of Iron Man 3 (yes, I want to be Tony Stark) I finally finished assembling RoboRaspbian. And realised that he should actually be part of Project Santander. Cue more modifications…

Quick recap: I like to build stuff, partly for fun, partly to exercise my brain, and partly to test ideas out about the future. RoboRaspbian started relatively simply: I found an original RoboSapien toy, minus his remote control, in a charity shop for £3 or so. Seemed like a bargain but I wanted to be able to control him.

I looked at replacing his microprocessor but this seemed unnecessary when I could get all the movement I wanted just by sending the right commands to the existing one. I found someone had turned a Raspberry Pi into a universal remote control capable of outputting the right commands, and so the project became ‘strap a Raspberry Pi to the back of a RoboSapien’. With some relatively simple electronics to bridge the two (read LOTS of trial and error), RoboRaspbian was born.
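The bridge boils down to mapping named actions onto the command codes the robot’s existing microprocessor already understands. A sketch of that dispatcher follows – the codes below are placeholders, not the real RoboSapien IR protocol, and the transmit step is stubbed out, since on the actual robot it drives the Raspberry Pi’s IR output:

```python
# Sketch of 'send the right commands to the existing microprocessor'.
# The command codes are placeholders -- the real RoboSapien codes
# differ -- and transmit is injectable so the logic can be exercised
# without the IR/GPIO hardware attached.

COMMANDS = {
    "walk_forward": 0x01,   # placeholder code
    "wave": 0x02,           # placeholder code
    "stop": 0x03,           # placeholder code
}

def send_command(action, transmit=lambda code: code):
    """Look up a named action and hand its code to the transmit
    function (in reality, the Pi's IR output routine)."""
    if action not in COMMANDS:
        raise ValueError(f"unknown action: {action}")
    return transmit(COMMANDS[action])

sent = send_command("wave")
```

Injecting the transmit function was most of the “LOTS of trial and error” in spirit: the lookup logic is trivial, getting the signal onto the robot’s receiver is not.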

Then I thought: wouldn’t it be nice if I could make him talk – more than just his original, limited vocabulary (mostly yawns and farts)? Get him to converse with the kids. I’d already done this on a previous robot project, Sammy. So I added the text-to-speech engine Flite, and fitted an amplifier into the small amount of spare space inside the robot’s chest. This hooks the audio output of the Raspberry Pi into the original speaker, matching the volume of his in-built sounds.
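Flite is a command-line engine, so driving it from the Pi is a one-liner. A minimal sketch, assuming the flite binary is installed and the audio is routed through the amplifier as above; the command is built separately so it can be inspected without the hardware:

```python
import subprocess

# Sketch of driving the Flite text-to-speech engine on the Pi.
# Assumes the 'flite' binary is installed; audio is routed through
# the amplifier to the robot's original speaker.

def speak(text, dry_run=False):
    """Build (and optionally run) the flite command for a phrase."""
    cmd = ["flite", "-t", text]
    if dry_run:
        return cmd          # let callers inspect without audio hardware
    subprocess.run(cmd, check=True)
    return cmd

cmd = speak("Good morning. You have three new emails.", dry_run=True)
```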

So now I have a talking, gesturing robot. Nothing particularly smart about him though: he is entirely human-controlled. But hang on a second: I’ve spent the last few months rolling out sensors around my house*. They could feed him with all sorts of interesting data. Getting him to tell me when certain areas were too cold, or when humidity levels got too high would be really cool. Plus he can gather all sorts of information from the web: new emails/tweets etc.
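Turning sensor readings into things the robot says is then just a set of rules. A sketch, with rooms, readings and thresholds invented for illustration (my actual thresholds will come down to taste and heating bills):

```python
# Sketch of turning home sensor readings into announcements for the
# robot. Rooms, readings and thresholds are invented for illustration.

MIN_TEMP_C = 16.0
MAX_HUMIDITY = 65.0   # percent relative humidity

def announcements(readings):
    """readings: {room: {'temp': C, 'humidity': %}} -> phrases to speak."""
    phrases = []
    for room, r in sorted(readings.items()):
        if r["temp"] < MIN_TEMP_C:
            phrases.append(f"The {room} is too cold: {r['temp']} degrees.")
        if r["humidity"] > MAX_HUMIDITY:
            phrases.append(f"Humidity in the {room} is high: {r['humidity']} percent.")
    return phrases

alerts = announcements({
    "bathroom": {"temp": 19.0, "humidity": 78.0},
    "office": {"temp": 14.5, "humidity": 40.0},
})
```

Each phrase would then be handed to the text-to-speech engine, so the robot volunteers the warnings rather than waiting to be asked.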

So, we have a new plan: humanoid (ish) robot becomes the user interface for the Internet of Things. Down the line I can add a microphone and make the voice interface two-way using something like PocketSphinx or Google’s Speech API.

This will require one simple (ha!) hardware modification: if the robot is to be on 24/7, I’m sure as hell not running him on batteries.

*Note: As I write this, the blog is rather behind my actual progress with the home automation project, which is stably monitoring multiple conditions in four different rooms.