Book of the Future: Are you ready for tomorrow?

Stratification: Has the Integration vs Specialisation Question Been Answered for Tomorrow’s Business? Fri, 04 Apr 2014

In the future, if you want a big and profitable business, are you better served focusing on your core value, or owning every step in the supply chain and operating every component? It’s an age-old question, the answer to which has been subject to wide debate and big swings in fashion, probably since the earliest enterprises began.

Each time the issue is raised it comes with new buzzwords, each component acquiring its own terminology for moving in or out of the organisation: call centres (‘outsourced’), development (‘offshored’), telephony (‘hosted’), software (‘SaaS’), hardware (‘cloud’), and so on. Logistics, marketing, finance, property, retail, manufacturing, design, and even the original product and service concepts themselves can all be offloaded.

In the ’90s and early noughties it was all about moving functions out of the business, like call centres and software development. Then came the turning of the cycle, the inevitable backlash. Companies now make a big point of having ‘UK call centres’. I’m hearing grumblings about outsourced development that may see similar slogans slapped on locally-built software.

Purity of Principle

The reality is that few businesses can exist in either ‘pure’ state: entirely lean and focused, or monolithic and integrated.

Take Apple, for example: a famously vertically-integrated business and one that bucked the trend through the 90s and noughties. Apple owns and controls a large number of the steps in the content supply chain: the design and manufacture of the devices, the software that runs on them, the shops that sell them, and the online stores that sell the content for them. But it still outsources large parts of its manufacturing. It has to source components from a wide range of suppliers, some of them competitors elsewhere in the market. And it opens up its own software and content marketplace to a large number of suppliers.

Tiers of Business

It is this last tier, the software and content marketplace, that I find most interesting, because Apple both competes in this marketplace with its own applications and allows others to enter – even if their applications compete directly with its own.

Amazon does the same, operating a vertically integrated business but allowing others – even competitors – access to certain tiers. Take the layer diagram I use to break down so many things into manageable chunks.

[Stratification layer diagram]

Action Layer: This is the consumer.
Presentation Layer: This is the online shop with which so many consumers are now familiar.
Processing Layer: This is Amazon Web Services. What few consumers know is that Amazon allows other companies, including other online retailers, to use its incredibly powerful and cost-effective computing platform to host their shops, applications, databases, and files.
Connection Layer: Amazon’s logistics operation is enormous and expanding. It doesn’t serve other retailers today, but it could.
Collection Layer: Likewise Amazon’s procurement operation is not yet open to others. But could it be?

This situation is replicated across technology businesses in hardware and software alike. Companies are increasingly tiering their organisations and recognising that those tiers have to be treated as standalone markets. There is no problem dealing with suppliers or customers in one tier who might be competitors in another. Witness Samsung making screens for itself and Apple. Intel fabricating chips featuring arch rival ARM’s core processor designs. Microsoft finally releasing Office for iPad.

Low Friction Interactions

What’s most interesting about this ‘co-opetitive’ trend is not that it is happening in these different layers – this has been the case for some time; Microsoft, for example, has long made software for, and been an investor in, Apple. What I find interesting is how the layers interact.

To give a practical example, I’m building a dashboard for my business. This will show me the key performance indicators for my weird hybrid of a business, at a glance. To do this I’m using a very nice framework called Dashing, built by techies at the ecommerce platform Shopify (a company which itself could be a nice case study in stratification). Because Dashing is built in a language (Ruby) that my usual web hosts don’t support, I had to try an alternative. Having heard good things about OpenShift, I set my application up there and got it running before I’d really read around what it is.

Put simply, OpenShift acts as a smart management/integration/user interface layer over Amazon Web Services – Amazon’s web and application hosting platform. When I add my application to OpenShift what it actually does is set up hosting for me on Amazon’s platform. It can do this, seamlessly and transparently, because the interface between the two systems is entirely automated and programmatic. Even though a new order potentially means the reconfiguration of physical hardware, the conversation happens entirely in data.
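The whole exchange can be sketched as data. This is purely illustrative – the function names, fields and prices below are invented, not the real OpenShift or AWS API – but it shows the shape of an order flowing one way and a costed service flowing back:

```python
import json

# A hypothetical provisioning exchange: the "conversation in data"
# between an OpenShift-style broker and an underlying hosting platform.
# Endpoint names, fields and prices are invented for illustration.
def provision_request(app_name, runtime, size):
    """Build the order the broker sends downstream, as pure data."""
    return json.dumps({
        "action": "create_instance",
        "app": app_name,
        "runtime": runtime,
        "size": size,
    })

def provision_response(request_json):
    """Simulate the platform's data-only reply: capacity allocated, cost quoted."""
    order = json.loads(request_json)
    rates = {"small": 0.02, "medium": 0.08}  # illustrative hourly rates
    return {
        "app": order["app"],
        "status": "provisioned",
        "hourly_cost_usd": rates.get(order["size"], 0.30),
    }

reply = provision_response(provision_request("dashboard", "ruby-1.9", "small"))
print(reply["status"])  # provisioned
```

The point is that nothing in the exchange needs a human: a new order is just a message, and so is the reply.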

This interface between the two companies and the two systems is incredibly low friction. Commands flow in one direction, costs and services flow back in the other. It means Amazon can afford to sell its services to lots of people in lots of ways, because the cost of reaching them and supporting them is incredibly low, further enhancing (or at least maintaining) the low prices. Because interactions happen in real time, and are data driven, monitoring and resolving any breakdowns in the service is quicker and if not easier, then at least based on good evidence – not always the case in human to human interactions.

People as a Service

Stratification typically applies to business units or functions. These can be neatly packaged with an interface on the outside that allows the relevant information and resources to be passed back and forth. But increasingly people are becoming package-able units too.

Through my conversations with Tim Lovejoy on the future of work, I’ve been looking a lot at the increasing drivers towards self-employment. At one end of the economy, self-employment is almost being mandated. Under-employment and zero-hours contracts are driving people to juggle multiple ‘clients’ and supplement their income with other work. At the other end, self-employment is ever more appealing: software and web-based services take much of the pain and cost out of running your own small business, and it remains highly attractive from a tax perspective. Businesses are even encouraging senior employees to move to more flexible working, recognising the potential cost savings on both sides and, more importantly, the valuable learning that executives bring back from other businesses.

This is essentially stratification of the workforce. Employees are becoming thin layers, or segments of layers in their own right. Via corporate collaboration software or virtual work exchanges, each has their own low-friction, data-driven interfaces to the other layers.

What is crucial in an arrangement like this is ensuring that the business retains access to the workforce that it needs, when it needs it. And that the workforce maintains a relationship with the business that supports their loyalty and motivation. Achieving this in a flexible role for a senior executive is one thing. Achieving it for a large workforce on zero-hours contracts, quite another.

Advantages of Stratification

I believe that stratification – the connection of networked business units, right down to the individual scale, via low-friction programmatic interfaces – is becoming an increasingly visible trend in the structure of business. And for good reason: it improves the interaction between layers in an organisation, and it allows those layers to be opened up to third parties. This approach, and the business philosophy attached to it, has a role to play in the development of many organisations across the public and private sectors.

Some of the key advantages it offers are:

Agility: One of the most common questions I am asked in consulting engagements is how companies and organisations can become more agile. By codifying and streamlining their interactions, stratification allows the different parts of an organisation to be moved around more easily. New interactions with other departments or third parties can quickly be added to support new products, services or enhancements.

New Revenue Streams: In the organisations I deal with there is often untapped value in various repositories of data or well-developed internal processes. Stratification not only increases the visibility of these hidden treasures but also creates the means by which they can be accessed.

Reporting: Stratifying the organisation forces the automation of systems that may have held out until now, including reporting. I have encountered few organisations where good-quality performance data reaches the people who need it in the timeframe most would consider ideal – i.e. real time.

Disadvantages of Stratification

By its very nature stratification lays bare the inefficiencies in a business and drives automation. As many people are noting now, digital automation may drive productivity but it doesn’t create jobs. Like any business change, stratification will encounter resistance and this resistance will be reinforced by the level of transparency it creates around the activities of departments and individuals.

Principles of Design

Where organisations have succeeded in stratification there seem to be some clear design principles emerging.

Codify Inputs and Outputs: The first step is to break an operation down into clear units that have defined and repeatable inputs and outputs. This won’t be possible with all pieces of the business, but in some cases this may highlight where one unit is being tasked with multiple jobs that ought to be split. For example, it can be hard to operate efficiently when one team is tackling parallel work streams with very different work flows, time scales and deliverables.
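One way to picture a codified unit is as a plain function: defined inputs, defined outputs, nothing else. The department and field names below are hypothetical, a sketch of the principle rather than any real process:

```python
# A department treated as a unit with codified inputs and outputs.
# The function signature is the contract: purchase orders in,
# summary totals out, ready for the next layer.
def invoice_processing(purchase_orders):
    """Input: a list of purchase orders. Output: totals for the finance layer."""
    total = sum(po["amount"] for po in purchase_orders)
    return {"count": len(purchase_orders), "total": round(total, 2)}

print(invoice_processing([{"amount": 120.50}, {"amount": 79.49}]))
# {'count': 2, 'total': 199.99}
```

Anything that can't be expressed this cleanly is a candidate for splitting into smaller units.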

Systematise Processes: While most of our work (though not all) may be computerised, much of it is not systematised. The work flow in departments is not a defined, documented process. Instead knowledge is locked in people’s heads. Processes are carried out using general office software packages and email, rather than tools that encapsulate the process in themselves. These tools can guide users through a process and check for errors along the way, increasing quality, reducing completion time and enabling new staff to be trained very quickly. Most importantly systematising the process automates the production of performance data, giving management much clearer insight into each corner of the business.

Open Data: Data is the language of stratification. This is not a diminution of the importance of human relationships in minimising the friction in a business, but a recognition of the fact that the speed and precision of data can overcome some of the limitations in a purely ‘meatspace’ process. Opening up restrictions and overcoming fears about the publishing of data, internally and externally, privately and publicly, is a huge part of the stratification process.

Publish APIs: An API is an application programming interface. It is the means by which commands and data can be sent and received from a piece of software by another piece of software rather than a human user interface. Think of it like the pins and holes on a piece of Lego that allow it to connect to others. Just like Lego, with an API to your software – or your departments – it is much easier to rearrange the building blocks of business into new configurations, supporting new processes or creating new services or products.
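As a concrete (and entirely hypothetical) sketch, here is what one of those Lego studs might look like as a tiny JSON request handler. The endpoint and field names are invented for illustration:

```python
import json

# A minimal handler exposing one capability of a hypothetical warehouse
# department: JSON in, JSON out, no human user interface required.
STOCK = {"widget": 240, "gadget": 12}  # stand-in for the department's data

def handle_request(path, body):
    if path == "/stock-level":
        query = json.loads(body)
        return json.dumps({"sku": query["sku"],
                           "in_stock": STOCK.get(query["sku"], 0)})
    return json.dumps({"error": "unknown endpoint"})

print(handle_request("/stock-level", '{"sku": "widget"}'))
```

Any other block of the business – or a third party, if the interface is opened up – can now query stock levels without knowing anything about how the warehouse works internally.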

Applying Stratification

Stratification is not a wholly original idea, more a synthesis of some key trends in the development of organisations and businesses over the last thirty years. It is in itself a ‘re-combinatorial innovation’. That gives us confidence that even though the idea is at an embryonic stage, at least its component parts are sound.

Where stratification perhaps most extends existing thinking is in the idea of applying the principle of an API to a department or even a person, rather than just a piece of software. This will not be universally popular. But it’s important to repeat: this is not about valuing computers over people, or diminishing the role of people in an organisation. We are not expecting people to behave like software. Rather, as has been pointed out in multiple books and studies (see ‘The Second Machine Age’), there are things that machines do better and there are things that people do better. Recognising this and separating the work accordingly has long been a recipe for increased productivity.

Capturing process in software, and using that captured process to handle inputs and outputs and deliver good performance metrics, makes a lot of sense. Using that same captured process to create a wrapper around departments, which can then be understood and manipulated as logical blocks in the organisation, makes sense. Allowing third parties to access the now open, standard interfaces that connect these blocks together also, I believe, makes sense. Inside that well-defined framework should be greater, rather than lesser, opportunity for people to do the things that people do best: interact, innovate and improve the business around them.

Humanoid Robots: The User Interface for the Internet of Things? Tue, 11 Mar 2014

Two of my projects are merging. This morning, inspired by a second viewing of Iron Man 3 (yes, I want to be Tony Stark) I finally finished assembling RoboRaspbian. And realised that he should actually be part of Project Santander. Cue more modifications…

Quick recap: I like to build stuff, partly for fun, partly to exercise my brain, and partly to test ideas out about the future. RoboRaspbian started relatively simply: I found an original RoboSapien toy, minus his remote control, in a charity shop for £3 or so. Seemed like a bargain but I wanted to be able to control him.

I looked at replacing his microprocessor but this seemed unnecessary when I could get all the movement I wanted just by sending the right commands to the existing one. I found someone had turned a Raspberry Pi into a universal remote control capable of outputting the right commands, and so the project became ‘strap a Raspberry Pi to the back of a RoboSapien’. With some relatively simple electronics to bridge the two (read LOTS of trial and error), RoboRaspbian was born.

Then I thought: wouldn’t it be nice if I could make him talk – more than just his original, limited vocabulary (mostly yawns and farts) – and get him to converse with the kids? I’d already done this on a previous robot project, Sammy. So I added the text-to-speech engine Flite, and squeezed an amplifier into the small amount of spare space inside the robot’s chest. This hooks the audio output of the Raspberry Pi into the original speaker, matching the volume of his built-in sounds.

So now I have a talking, gesturing robot. Nothing particularly smart about him though: he is entirely human-controlled. But hang on a second: I’ve spent the last few months rolling out sensors around my house*. They could feed him with all sorts of interesting data. Getting him to tell me when certain areas were too cold, or when humidity levels got too high would be really cool. Plus he can gather all sorts of information from the web: new emails/tweets etc.
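The alert logic is simple enough to sketch. The thresholds and room names here are my own illustrative choices; on the robot, each returned message would be piped to Flite to be spoken aloud:

```python
# Turn sensor readings into spoken-word alerts. A reading is a dict of
# measurements per room; rooms without a given measurement are skipped.
def alerts(readings, cold_below=16.0, humid_above=70.0):
    messages = []
    for room, data in readings.items():
        if "temp_c" in data and data["temp_c"] < cold_below:
            messages.append(f"{room} is too cold at {data['temp_c']} degrees")
        if "humidity_pct" in data and data["humidity_pct"] > humid_above:
            messages.append(f"humidity in {room} is high at {data['humidity_pct']} percent")
    return messages

for message in alerts({"hall": {"temp_c": 14.5}, "bathroom": {"humidity_pct": 78}}):
    print(message)
```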

So, we have a new plan: humanoid (ish) robot becomes the user interface for the Internet of Things. Down the line I can add a microphone and make the voice interface two-way using something like PocketSphinx or Google’s Speech API.

This will require one simple (ha!) hardware modification: if the robot is to be on 24/7, I’m sure as hell not running him on batteries.

*Note: As I write this, the blog is running rather behind my actual progress with the home automation project, which is stably monitoring multiple conditions in four different rooms.
Take a Picture, With Your Mind. Wed, 05 Mar 2014

Imagine taking a picture just by thinking.

You train a neural interface to recognise the patterns of activity that are fired when you hit the shutter button on a camera, or your phone. Then when you think that thought, the neural interface triggers snapshots from discrete – and discreet – wireless cameras distributed around your body. One in your glasses, one in your shirt button, one in your shoes.

Software in the cloud stitches the images together into a multi-megapixel whole and works out what the likely focus was meant to be, dynamically polishing the output, sharing it to your social streams and storing it for posterity.

This isn’t some wild sci-fi fantasy. It’s a very close reality.

Neural interfaces are already consumer items, available for just a few tens of pounds in gaming systems. Recognising the same brain patterns being repeated should actually be relatively simple.

At Mobile World Congress last week Rambus showed me a camera the size of a pinhead. It needs no lens, and will cost less than 20p per unit once it is manufactured in volume.

Wireless data standards for short range transmission advance apace. Power requirements at the personal area range are low. And with the demonstrations the Alliance for Wireless Power showed me last week, a wireless charging unit could keep button-sized batteries powered up all day.

Send the images up to the cloud over 4G – it doesn’t have to be instantaneous if you’re not stood there holding your phone and waiting – or Wi-Fi. There’s loads of computing grunt on tap and automatic post-processing is already well developed.

This is real. The question is, do we want it?

People are already uncomfortable with Google Glass, but that stands out a mile. What happens when your smart wearable devices disappear into the fabric of your everyday clothing? It’s a theme to which I keep returning because it is imminent.

It’s up to us to discuss this stuff and set some rules, if we want them.

Smart Cities: We Need to Talk Sun, 02 Mar 2014

Amid the hype and bluster of Mobile World Congress it is refreshing to hear someone admit they don’t know the answer. Francisco Jose Jariego Fente is Telefonica Digital’s Industrial Internet of Things Director. The question he willingly accepts he can’t answer is admittedly a tricky one: what is the business model for smart cities?

Telefonica has more evidence than most for what the answer, or answers, might be. Its project in Santander has proven there is little money to be made in the hardware: the city rolled out 12,000 sensors funded by a relatively small €1m from the EU. And the sum of the data collected from those sensors – just 5MB per day, similar to a single photo or MP3 file – suggests there is very little to be made in its carriage or storage.

The biggest challenges, and hence the biggest potential revenues, come in processing and presenting the data in a useful form. This is where Telefonica has focused its efforts and is looking to commercialise the learning from the Santander experiment. IBM too has recognised that this is where the value lies.

But this value only becomes tangible when the rest of the smart city ecosystem is in place. Cities are complicated. They are managed by multiple authorities and commercial parties. They evolve constantly, reacting to the needs of their inhabitants. And those inhabitants themselves, who in many ways represent the city much more than its buildings or infrastructure, have a say in how it develops: any executive control is limited.

Building a smart city on a green field site like South Korea’s Songdo is one thing. But there are huge drivers to smarten all our cities. And that means retrofitting technology, processes and partnerships to an existing, evolved organic environment. One model isn’t going to fit every city. Making it happen will be a process of negotiation, integration, iteration. And there will be lots of different parties involved: political leaders, civil servants, service providers, technology companies, health services, police forces, property owners and most important of all, the citizens themselves.

Brokering a framework that keeps all of these people at least relatively happy, while delivering on the promise of smart cities, is no small task. It will only come through dialogue. But it’s a conversation we need to have, because the promise of smarter cities is too great to ignore.

In the first instance there are simply lower costs, both financial and environmental. There are lifestyle benefits: less traffic, quicker parking, more efficient public transport. Taking things a step further, there are advantages for planners: recognising a noise problem in one place might inform a change in planning for a new building nearby, perhaps requiring materials that absorb or deflect sound, or the planting of trees as a screen. Ultimately, there is the prospect of properly understanding our cities and the interactions that make them live, so that we can make more informed decisions about their future – in local government, in corporations, and as individuals.

Smart cities have long held promise, but the complexity of the problem they present has retarded their progress. To get things moving, as we need to do, a broad and open conversation between all of the interested parties is required: to agree how the interactions will be managed and, vitally, how the costs and rewards will be divided.

There’s No Such Thing As A Digital Industry Wed, 05 Feb 2014

Last night was spent debating Manchester’s future as a home to technology innovation, with representatives from the city leadership, technology, telecoms, law, and finance firms. One issue really stuck with me: everyone was keen on the idea of promoting the digital industry, but there was frustration at the lack of a common voice for the sector.

The problem is this: no-one can speak for the digital industry because there is no digital industry. Not in Manchester. Not anywhere. What is loosely grouped into ‘tech’ or ‘digital’ is actually three or four (possibly even more) very distinct sectors with very, very different needs.

In Manchester I’d classify these as ‘Products’, ‘Services’ and ‘Infrastructure’ but you could probably add ‘Materials’ and ‘Advanced Manufacturing’.

Product companies are proper tech start-ups. Companies that are incubating an idea with great potential to scale. They need an environment to network and meet to start with, so that teams can naturally assemble. Then above all else they need time. Time means the right sort of finance, and low overheads: office space, connectivity etc. Once they reach scale they need access to talent: generally high level talent with specific, technical skill sets but also sales, support, marketing and creatives.

Services companies are largely marketing/digital agencies of one form or another. The skills they require are very different, as much creative and inter-personal as technical. These companies have limited potential for scale: it’s a highly competitive market. Growing means adding people and the cost of managing those people rapidly starts to diminish the focus of the founders. The best hope is reasonable scale, stability, and good margins, and ultimately perhaps a trade sale to a local rival or national network. What they need is opportunities to sell and access to contracts, from the public sector and large local companies.

Infrastructure companies might serve the start-ups, or the agencies, or any other businesses around Manchester. Depending on their particular focus the challenges might be access to power, the cost of laying fibre, or competing with unfairly advantaged national players. They need technical skills but those skills are generally very different to those required by the product or service companies.

Manchester’s agencies have a good representative body. But that body doesn’t speak for start-ups, and it doesn’t speak for the infrastructure players. In fact I don’t know of any single body that claims to speak for those groups and their needs.

If Manchester, and the UK as a whole, is going to have future economic success powered by technology-driven businesses, then those businesses need to be understood for what they are. Not conflated in groups under meaningless terms like ‘the digital industry’, ‘the technology sector’, or ‘the knowledge economy’.

You Don’t Want Twitter to be Judge and Jury, Just a Good Citizen Thu, 23 Jan 2014

Recently I was a witness in a trial. You don’t need to know the (frustrating, depressing) details, just that I turned up when asked, said what I had seen, and left again.

This is what I want Twitter to do.

In the wake of the appalling abuse hurled at Stan Collymore, this is what I spent a chunk of yesterday explaining to various BBC shows. We do not want an unregulated, foreign, social network to be the ultimate arbiter of acceptable behaviour online.

Yes it should have a fair use policy, and yes it should automatically block people in breach of that policy. But more importantly than that it should make it very, very easy for the relevant law enforcement agencies in any country to collect evidence and respond appropriately.

My main concern with the Collymore case – ignoring the evidence it has brought to light of the number of absolute tools out there – is the time it has taken for Twitter to pass evidence to the police: six weeks in one case, according to Collymore.

Now I recognise that there are cases where Twitter should not be sharing evidence of the identity of anonymous users. But not here: these cases are clear cut racial abuse and threats of violence. There’s no argument for freedom of speech.

Twitter needs a means of rapidly recognising when the sharing of data is justified, and responding. But perhaps the police and Crown Prosecution Service need a better system of responding too.

It’s clear from the case of Caroline Criado-Perez and others that when prosecutions do come, they often cover only a tiny minority of offenders. Is there perhaps a way for the police to automate the collection of data from Twitter (and other social networks) and the processing of prosecutions? Rapidly sending out warning letters (emails?) to caution people that their abuse has been noted and a prosecution may follow might help to thin out the stream of invective and limit it to the truly committed idiots.

This might sound a little oppressive and that concerns me too. But think of it like policing a drunken city centre on a Friday night. There will be lots of infringements but the police are unlikely to prosecute all of them. Instead they will dissuade most people from committing serious offences through their presence and a stern word at the right times.

This might not satisfy everyone, but it might help to keep Twitter and other public social networks open for the type of rich debate and sharing of information that so many of us enjoy and value.

Project Santander: Sticking Data in the Database Wed, 22 Jan 2014

WARNING: TECHIE POST. Following my visit to the smart city project in Santander with Telefonica last year, I was inspired to start building my own smart home based on similar technologies. This is partly an exercise of my rusty engineering skills but mostly about learning the realities of smart cities/smart homes through experience. I’ll write up the lessons in a much less techie form, but for those who are interested, I’ll also be documenting the detail here.

In the last episode I got my first sensor up and running on the end of an Ethernet cable, thanks to the RESTDuino sketch. Now I need to get the data this sends back into my SQL database.

This proved to be incredibly simple. Now note: I am NOT a coder and this stuff is probably seriously ugly. But it works and that’s what I care about right now.

Into the code I had put together for the AlertMe API I added the Pest library, which makes accessing a RESTful interface using PHP dead easy. It’s then a simple task to extract the useful data from the little JSON string that comes back from my light sensor and squirt it into a new table in my SQL database.
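For readers who don’t speak PHP, the same extract-and-convert step looks like this in Python. The JSON shape here is a guess at what a RESTDuino-style endpoint returns – the real string from my sensor will differ:

```python
import json

# Hypothetical reading returned by the light sensor's REST endpoint.
raw = '{"pin": "A1", "value": "412"}'

reading = json.loads(raw)
room_code = "LNG"                # my own room-code convention, illustrative
value = float(reading["value"])  # ADC counts arrive as a string
print(room_code, value)          # values now ready to insert into the table
```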

It’s worth talking a little bit about the database here. I’m structuring it with a series of tables, one for each different type of data – power, temperature, light and so on – rather than one for each room. This should mean that I don’t have to create a new table each time I add a new sensor. I can just record values indexed by their time stamp and room code. I’m hoping this should make things simpler.
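Sketched in SQL (here via Python’s built-in sqlite3 rather than my actual setup, and with column names that are illustrative guesses), the one-table-per-measurement-type scheme looks like this:

```python
import sqlite3

# One table per measurement type, keyed by timestamp and room code.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE light (
        recorded_at TEXT NOT NULL,
        room_code   TEXT NOT NULL,
        value       REAL NOT NULL,
        PRIMARY KEY (recorded_at, room_code)
    )
""")

# A new sensor in a new room needs no schema change - just new rows.
conn.execute("INSERT INTO light VALUES ('2014-01-22T21:00:00', 'KIT', 412.0)")
conn.execute("INSERT INTO light VALUES ('2014-01-22T21:00:00', 'LNG', 387.5)")
print(conn.execute("SELECT COUNT(*) FROM light").fetchone()[0])  # 2
```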

Anyway, after just a few failed attempts (mostly down to my lack of understanding of JSON and arrays) my code was working and happily sticking data from the light sensor into my database.


Project Santander: Ethernet and REST Tue, 21 Jan 2014

WARNING: TECHIE POST. Following my visit to the smart city project in Santander with Telefonica last year, I was inspired to start building my own smart home based on similar technologies. This is partly an exercise of my rusty engineering skills but mostly about learning the realities of smart cities/smart homes through experience. I’ll write up the lessons in a much less techie form, but for those who are interested, I’ll also be documenting the detail here.

[Image: Arduino with Ethernet shield]

Excitement today. My Ethernet shield finally arrived from China. What can I say? I’m a cheapskate and I don’t mind waiting a little longer to save a few quid.

So, straight into testing this evening. I’ve decided to start with the RESTDuino example as the basis for my sketch, so I downloaded the relevant files and uploaded them to my test Arduino with the Ethernet shield attached. Note: first challenge – the sensor shield won’t fit on top of the Ethernet shield, nor will they fit the other way around, because the ICSP header gets in the way. Looks like I might be going down the prototype shield route to attach all my various sensors.

Anyway, I left the defaults as they were in the demo sketch and uploaded it to my board. I plugged an ethernet cable into the nearest wall port (I have more thanks to the fantastic Devolo dLAN kit I am currently testing) and hooked up an LED to pin 9 as suggested in the demo. I then hit the address – – that should have turned it on.


So a quick scan of the network using iNet, one of the most useful iPhone apps I own. Sure enough, this showed me the address the Arduino was actually on. Despite being assigned a fixed IP, it is clearly getting one by dynamic assignment. There’s something to sort down the line.

Anyway, stick the new address in – and sure enough, the LED lights up. Hoorah!

So, let’s get a bit more sophisticated. A sensor reading.

Back to my favourite sensor (OK the only one I had handy at the start of this exercise), the photocell. I hooked this up in a voltage divider arrangement with a 10K resistor and fed the signal line back to analogue pin 1.
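The arithmetic behind that circuit is worth a line or two. Assuming the photocell sits on the 5V side and the fixed 10K resistor goes to ground, the voltage at the junction is Vout = Vin * Rfixed / (Rphotocell + Rfixed), which the Arduino’s 10-bit ADC reads as a count from 0 to 1023. The resistance figures below are illustrative:

```python
# Predict the analogue reading for a given photocell resistance in the
# voltage-divider arrangement described above.
def adc_count(r_photocell_ohms, r_fixed_ohms=10_000, vin=5.0):
    vout = vin * r_fixed_ohms / (r_photocell_ohms + r_fixed_ohms)
    return round(vout / vin * 1023)

print(adc_count(10_000))  # photocell matches the fixed resistor: mid-scale
print(adc_count(1_000))   # bright light, low resistance: a high reading
```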

Go to and what do I get? A JSON string that looks like this:


Superb! Even I ought to be able to work out how to pass that into my SQL database. But that’s for next time…

Mutable Language: How Tech Might Transform Talk Mon, 13 Jan 2014

Companies are singular. Not that you would know it listening to the Today programme this morning on Radio 4. The talk was all of what this company or that ‘are’ going to do.

This isn’t some grammar nerd’s rant. When I was in marketing I always struggled with this rule and ended up striking an ugly compromise most of the time. Saying that a company ‘is planning’ to do something just sounds cold and impersonal. Ascribing agency to an organisation seems wrong. It’s the people that are planning to do something, not the corporate entity.

This is just another example of how language changes over time. I’m sure at some point it will be strictly acceptable to say that companies ‘are’ doing something, even if it continues to grate with some. Language changes all the time and in the next few years technology is going to drive some very rapid changes.

I’m a little bit obsessed with an app called WordLens. It has been around a few years now, but it continues to amaze people every time I demonstrate it. Point the camera on your phone at some foreign text and WordLens live translates it into English and overlays the translation on the screen, over the original text. Suddenly foreign signs and menus appear in English. This is impressive enough on a phone screen. Imagine it integrated into Google Glass: you’re walking round in a foreign land yet everything appears to you in English.

Now add an audio equivalent that live translates the sounds you hear and feeds them to you as subtitles or straight into your ear, dipping the volume of the foreign language speaker to compensate. Suddenly many of the barriers to communication disappear.

Many, but not all. It will be a long time before a computer can cope with the finer nuances of language: accent, emphasis and dialect. While these technologies might help tourists to get by, they’re never going to let you speak like a native. In fact they will probably emphasise the importance of the words and linguistic idiosyncrasies that sit outside of the global digital dialect. The ones that signify particular geographical or social sub-cultures.

For those who become reliant on digital translation, there will probably be a counter-force to this, standardising languages globally. It will be hard to maintain British English when US English rules and spellings are constantly being reinforced on the screen right in front of your eyes. Sure, there will be local dictionaries, but we know which will be the default.

In the future it is likely to be Google’s default dictionary that determines the acceptability of ‘is’ or ‘are’. Even for Radio 4 presenters.

Project Santander: AlertMe API and Starting the Software Sun, 12 Jan 2014 21:54:09 +0000 WARNING: TECHIE POST. Following my visit to the smart city project in Santander with Telefonica last year, I was inspired to start building my own smart home based on similar technologies. This is partly an exercise of my rusty engineering skills but mostly about learning the realities of smart cities/smart homes through experience. I’ll write up the lessons in a much less techie form, but for those who are interested, I’ll also be documenting the detail here.

One of the big advantages of being a sometime gadget reviewer is that you get sent stuff. And sometimes it doesn’t have to go back. A few years and two houses ago, AlertMe sent me one of its home energy monitoring kits and I still have it. I finally got around to setting it up in the new(ish) house a few weeks back and found out that AlertMe offers an API for remote access to the data.

This is cool. It means I will very simply be able to add monitoring of my electricity consumption into my database.

Fortunately some kind soul has written a PHP library for the AlertMe API and I have appropriated this and folded it into some code to stick the values it collects into a SQL database.

Now this is a good time to talk about the software I’ll be using for this project. In short, I’m going to write it.

This may seem like a terrible idea for someone who is a terrible coder. But it’s currently looking like the easiest option. One reason is that, having looked at all the off-the-shelf systems, I found that none of them does EXACTLY what I want. But more than that, I just don’t understand them. All the new ones seem to be written in Java, for people with a very deep knowledge of that language.

So at least in the first instance, I plan to use my very limited knowledge of PHP to hack together a piece of software that will:

  • Collect the data from a variety of sources, including Arduinos and the AlertMe API
  • Stick all this data into a SQL database
  • Provide me with a web interface to examine this data and send commands to the Arduinos to control devices

The plan is to put this together in layers/modules so that I can easily upgrade – for example swapping out the PHP web interface for something like NodeJS to give me more real-time control and updates.

The first piece of code I knocked up took the AlertMe PHP library example and set up a simple WHILE loop to collect data every fifteen minutes and stick it into a table in a SQL database. This is the same format I will use for collecting data from all the other sensors. Ugly? Yes. But by Jove, it worked almost first time.
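The shape of that loop is the same whatever the data source, which is why the format carries over to the other sensors. A rough sketch of the pattern, with Python and SQLite standing in for the PHP and SQL I actually used, and a stub function in place of the real AlertMe API call:

```python
import sqlite3
import time

# Rough shape of the polling loop. Python and SQLite stand in for the
# PHP and SQL setup described above; fetch_usage is a stub for the real
# AlertMe API call.

POLL_SECONDS = 15 * 60  # fifteen minutes

def poll_once(conn, fetch_usage):
    """Collect one reading and append it, timestamped, to the database."""
    watts = fetch_usage()
    conn.execute(
        "INSERT INTO energy (ts, watts) VALUES (?, ?)",
        (int(time.time()), watts),
    )
    conn.commit()

def run(conn, fetch_usage):
    """The 'simple WHILE loop': poll forever at a fixed interval."""
    while True:
        poll_once(conn, fetch_usage)
        time.sleep(POLL_SECONDS)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE energy (ts INTEGER, watts REAL)")
poll_once(conn, lambda: 350.0)  # stub reading instead of the live API
print(conn.execute("SELECT count(*), watts FROM energy").fetchone())
# (1, 350.0)
```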
