INSEAD Business School, Cornell University and the World Intellectual Property Organisation have just released their analysis of the world’s most innovative countries for 2016.

The result is called the Global Innovation Index (GII), and it aims to objectively measure several dozen indicators to assess which countries are comparatively more innovative than others.

You may have seen graphics being forwarded around LinkedIn, with a chart displaying the top of the “most innovative countries 2016” ranking next to a cross-section of random index numbers, declaring that once again Switzerland is the most innovative country in the world.

First of all, I want to congratulate these institutions and the hundreds of collaborators who work not only on the report itself, but also on compiling the thousands of data points that make the comparative analysis possible.

For innovation professionals like myself, this data trove is a goldmine for uncovering trends happening across the world, especially as you can compare the current data to previous years to find both local and global trends. The full report (which you can download for free at the above link) runs to more than 450 pages and goes into substantial detail on each country and on each index used to form the total. Here you can see how the top 10 countries’ positions have changed over the past four years, and how Switzerland has remained steady at the top.

Top 10 Most Innovative Countries for the past 4 years, GII 2016

Top 10 Most Innovative Countries 2016, GII 2016

The analysis itself is also often top-notch, with top economists sifting through both microeconomic and macroeconomic data to find the underlying trends. For example, the theme of the 2016 report was “Winning with Global Innovation”, and it showed how R&D is becoming more open and geographically diverse.

However, as you may have guessed from the title, I have one major concern with the foundation of the index, which leads me to take its findings (like those of all other reports I analyse) with a grain of salt:

In order to stay unbiased and objective in producing this index, the analysts rely on a series of numerical datasets that are available for comparing countries. This limited availability means the analysts need to choose which datasets represent the “innovativeness” of a country, and it is here that, in my opinion, they have put too much emphasis on macroeconomic and socio-political data points, and too little on data points which actually relate to achieving innovation.

The problem with extrapolating data

I spent 8 years as a management consultant, and a number of my projects were referred to as “Strategy Consulting”. On those projects, we would be asked to make predictions on a specific topic, such as the likely growth rate for an industry segment based on trends.

In order to make such an assessment, you need two primary things:

  1. Baseline and historical data points to establish the current position (e.g. comparative benchmarks, industry analysis)
  2. Insight into what the data is telling you, and how external future forces and current trends may affect the industry (scenarios such as disruptive new technology, price drops or changes in consumer behaviour)

The issue is that part 1 is quite impartial, unbiased and objective, as it is primarily numerical and based on currently available data. Part 2, however, is where humans like me get their grubby hands on these numbers and start using intelligence and insight to create future “scenarios” which may or may not come true. These scenarios are inherently more subjective, and will vary based on the people who provided the insight, and on their thought processes and intelligence. This is why consultants are paid so much. The more evidence available to back up the insights, the lower the risk that they are wrong, but they will never be 100% guaranteed.

Reports like the GII attempt to be completely impartial, and to achieve that they aim to focus on the available data (part 1) and remove as much of the human opinion (part 2) as possible.

The way they achieve this is by looking at available data sets which provide indicators they have chosen to represent innovation. They then assess each country against those indicators, and rank the results to see which countries lead on which indicators. The weighted totals of the various indices then produce a net rank for each country relative to every other country, allowing the GII team to say which country achieves the highest overall Innovation Efficiency Ratio ranking (it’s Switzerland).
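To make the mechanics concrete, here is a minimal sketch in Python of how such a composite score and efficiency ratio can be computed and ranked. The countries, scores and equal weights are entirely made up for illustration; this is a simplification, not the GII’s actual data or weighting scheme.

```python
# Minimal sketch of a composite innovation index and efficiency ratio.
# All countries, scores and weights below are hypothetical.

input_scores = {   # input sub-indicator scores (scale 0-100)
    "CountryA": [90, 85, 88],
    "CountryB": [84, 88, 80],
    "CountryC": [86, 82, 83],
}
output_scores = {  # output sub-indicator scores (scale 0-100)
    "CountryA": [80, 78],
    "CountryB": [76, 79],
    "CountryC": [77, 74],
}

def sub_index(scores):
    """Aggregate sub-indicators with equal weights (a simplification)."""
    return sum(scores) / len(scores)

results = {}
for country in input_scores:
    inp = sub_index(input_scores[country])
    out = sub_index(output_scores[country])
    results[country] = {
        "overall": (inp + out) / 2,  # overall score: mean of the two sub-indices
        "efficiency": out / inp,     # efficiency ratio: output per unit of input
    }

# Rank by overall score, highest first
ranked = sorted(results.items(), key=lambda kv: kv[1]["overall"], reverse=True)
for rank, (country, r) in enumerate(ranked, start=1):
    print(f"{rank}. {country}: overall {r['overall']:.1f}, "
          f"efficiency {r['efficiency']:.2f}")
```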

The indicators used by the Report are divided into sub-categories (with numerous other sub-indicators) as follows:

  1. Innovation Inputs
    1. Institutions
    2. Human Capital and Research
    3. Infrastructure
    4. Market Sophistication
    5. Business Sophistication
  2. Innovation Outputs
    1. Knowledge and Technology Outputs
    2. Creative Outputs

As I’ve previously stated, the underlying data is sound. It is not all purely quantitative; some qualitative assessments are included as well, such as ranking the quality of universities against each other.

And one thing I cannot argue against is that the wealth of data enables the team to provide some very interesting assessments of innovation. For example, take a look at this chart from page 60 of the report, which plots each country’s GII score against its GDP per capita. It shows clear groupings of countries: those underperforming relative to GDP, those achieving innovation, and the innovation leaders.

Innovation Leader, Achievers and Underperformers, GII 2016

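As a rough illustration of how such groupings can be derived, here is a sketch that fits a simple trend of index score against log GDP per capita and classifies countries by how far they sit above or below the trend line. The figures and the residual threshold are hypothetical, and the report’s actual grouping method may well differ.

```python
import math

# Hypothetical (country, GDP per capita in USD, index score) tuples --
# illustrative numbers only, not taken from the report.
data = [
    ("CountryA", 80000, 66), ("CountryB", 55000, 61),
    ("CountryC", 40000, 45), ("CountryD", 12000, 38),
    ("CountryE", 9000, 25), ("CountryF", 3000, 30),
]

# Least-squares fit of score against log(GDP per capita): score ~ a + b*log(gdp)
xs = [math.log(gdp) for _, gdp, _ in data]
ys = [score for _, _, score in data]
n = len(data)
x_mean, y_mean = sum(xs) / n, sum(ys) / n
b = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
    sum((x - x_mean) ** 2 for x in xs)
a = y_mean - b * x_mean

# Classify each country by its residual from the trend (threshold is arbitrary)
THRESHOLD = 3.0
for country, gdp, score in data:
    expected = a + b * math.log(gdp)
    residual = score - expected
    if residual > THRESHOLD:
        group = "outperforming for its income level"
    elif residual < -THRESHOLD:
        group = "underperforming for its income level"
    else:
        group = "in line with its income level"
    print(f"{country}: score {score}, expected {expected:.1f} -> {group}")
```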

However, the insights you gain from a ranking are ultimately only as good as the data behind it. The problem arises when you look at the methodology used in the report, and at which indicators have actually been chosen to represent innovation. In my view, some of them are rather poor innovation indicators, chosen simply because they were the data which was available.

What is innovation?

Here at Idea to Value, we have a very clear view on the definition of Innovation. How did we get it? By asking 15 of the world’s top innovation experts, and finding the common definition, which is as follows:

Executing an idea which addresses a specific challenge and achieves value for both the company and the customer

This puts the emphasis of innovation on value delivery.

If you look at the definition of innovation used in the Global Innovation Index, it is based on the Oslo Manual developed by the European Communities and the Organisation for Economic Co-operation and Development (OECD), and is as follows:

An Innovation is the implementation of a new or significantly improved product (good or service), a new process, a new marketing method, or a new organisational method in business practices, workplace organisation, or external relations.

Firstly, my issue with the underlying definition is that it is much too fixated on “newness”. According to it, anything new would be classified as an innovation, regardless of whether it adds value or is accepted by the market.

But the underlying issue with the concept is further elaborated on in the same section, where the authors admit:

Measuring innovation outputs and impacts remains difficult, hence great emphasis is placed on measuring the climate and infrastructure for innovation and on assessing related outcomes.

Here is the crux of the issue with the report: getting data on innovation outputs is hard, so the report is based predominantly on non-innovation metrics for which data is available. These are mainly macroeconomic data and socio-political data sets.

Having looked through them, I can say that some of these data sets do indeed correlate clearly with innovation as a value-based paradigm, which I am fine with.

For example, some of the Innovation Input indicators (a non-exhaustive list):

  • 1.3.1 – Ease of starting a business
  • 2.1.1 – Expenditure on education, % of GDP
  • 2.2.2 – Graduates in science & engineering, %
  • 2.3.2 – Gross expenditure on R&D, % GDP
  • 3.1.2 – ICT Use
  • 4.1.1 – Ease of Getting Credit
  • 4.2.1 – Ease of protecting minority investors
  • 5.1.1 – Knowledge-intensive employment, %
  • 5.3.1 – Intellectual property payments, % total trade

However, the issue is that alongside the good indices which have a direct correlation to innovation, there are a number of others which in my view are much more focused on general economic health and strength. The authors make the argument that this underlying climate and infrastructure is vital for innovation, and to a degree I agree with them. Still, I don’t fully support the idea that the following (non-exhaustive) list of other Innovation Inputs actually relates much to innovation specifically:

  • 1.1.1 – Political Stability and safety
  • 3.1.3 – Government’s online service
  • 3.2.1 – Electricity output, kWh/cap
  • 3.3.2 – Environmental performance
  • 4.2.3 – Total value of stocks traded, % GDP
  • 4.3.3 – Domestic market scale, bn PPP$

Yes, the values above can be used to assess the strength of an economy. But as the authors have admitted, they are not specifically related to a country’s or an individual’s ability to innovate. In fact, in many cases innovation is actually spurred on by challenges in a local market which need a novel solution, such as SMS-based mobile payments in African countries with neither strong financial institutions nor widespread, affordable ICT access. A number of the indices simply muddy the waters by adding ranking factors which are less related (or not related at all) to things which will increase or decrease a person’s ability to come up with and execute on value-adding ideas. The ranking would be more accurate if these factors were removed, leaving a smaller but more focused dataset.
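To illustrate how much indicator selection can matter, here is a small sketch comparing the ranking produced by a full indicator set with the ranking after the general-economy indicators are removed. The scores and indicator names are hypothetical, with equal weights for simplicity.

```python
# Sketch: how dropping indicators changes a composite ranking.
# All scores and indicator names below are hypothetical.

indicators = ["ease_of_business", "rd_spend", "political_stability",
              "electricity_output", "market_scale"]
# Indicators argued above to measure general economic strength, not innovation
general_economy = {"political_stability", "electricity_output", "market_scale"}

scores = {
    "CountryA": [70, 55, 95, 90, 92],
    "CountryB": [80, 85, 60, 55, 50],
    "CountryC": [60, 65, 70, 75, 70],
}

def rank(countries, keep):
    """Rank countries by the mean of the kept indicators, best first."""
    kept = [i for i, name in enumerate(indicators) if name in keep]
    avg = {c: sum(s[i] for i in kept) / len(kept) for c, s in countries.items()}
    return sorted(avg, key=avg.get, reverse=True)

full = rank(scores, set(indicators))
focused = rank(scores, set(indicators) - general_economy)
print("Full set:   ", full)     # CountryA leads on broad economic strength
print("Focused set:", focused)  # CountryB leads once economy-only factors go
```

In this toy example the leader changes entirely once the economy-only indicators are dropped, which is exactly the kind of sensitivity that makes indicator choice so important.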

The challenge in measuring innovation outputs

Even more issues arise when we look at the index data used to assess innovation outputs, which the authors themselves admitted is hard to get hold of.

This section is split into two as well, comparing sub-indicators for “Knowledge & Technology Outputs” and “Creative Outputs”.

“Knowledge & Technology Outputs” is predominantly made up of very good innovation-related indicators, including information on:

  • 6.1.1 – Patents by origin/ bn PPP$ GDP
  • 6.1.4 – Scientific & Technical articles/bn PPP$ GDP
  • 6.2.2 – New Businesses/th pop 15-64 (an especially good one in my view)
  • 6.2.5 – High- & Medium-high-tech manufactures, %
  • 6.3.2 – High-tech exports less re-exports, % total trade

On the whole, this section gives a good indicator of how much a country is contributing to research and new product development. With the data available, this is probably as good as it is possible to get.

However, the Creative Outputs section is another story. Here we find indicators such as:

  • 7.2.2 – National Feature films/ mm pop 15-69
  • 7.2.4 – Printing & publishing manufactures, %
  • 7.3.3 – Wikipedia monthly edits/mm pop 15-69
  • 7.3.4 – Video uploads on Youtube/pop 15-69

This is where the lack of coherent, comparable data becomes most clear. Many of these indicators are culturally biased, depending on language and local tastes. These cultural output indexes also relate only remotely to the concept of innovation, and in the case of printing and publishing they are actually measuring an industry which is in decline thanks to digital innovation.

The authors have had to rely on the data sources available, and they acknowledge the fundamental limitations when they admit:

Attempts made to strengthen this sub-pillar with indicators in areas such as Internet and machine learning, blog posting, online gaming, and the development of applications have so far proved unsuccessful.

So the good news is that the authors are aware of the limitations inherent in the report. I wish them luck in continually finding new ways to improve their data. At the moment, however, the rankings and the data they are based on must be taken with a grain of salt.

Is it measuring innovation or business efficiency?

My final point relates to the question which comes up when analysing the results of the GII itself.

Now, please don’t think I’m giving Switzerland grief, but it is not a country you associate with high technology and world-changing design innovations. It is much more the country you think of for good governance and efficiency of doing business, which is what so many of the macroeconomic indicators in this list relate to. This makes me ask how much of the ranking is the result of innovation from the country itself, and how much is down to large international businesses setting up headquarters in Switzerland for tax purposes.

Let me give you an example. I previously worked for many years at Deloitte, one of the world’s largest professional services firms, which does audit, consulting and advisory work for almost every major global company and employs more than 200,000 people. Yet the company was restructured as a Swiss Verein so that taxes would be as low as possible. The vast majority of the work is done inside each individual country at a local level, yet Switzerland ends up hosting the headquarters, where little of the progressive, high-tech work actually happens. And the same will certainly be true of things like patents filed on the company’s behalf. Many other large companies employ similar tactics.

This doesn’t mean that Switzerland isn’t an innovative place. It certainly is, with a large number of impressive universities and research hubs. However, when you look at where new, innovative companies are actually developing new technology on the ground, you are more likely to find them in cities like Berlin, London, San Francisco, Tel Aviv, Nairobi and Singapore than in Zurich. This just highlights to me the limitations of the macroeconomic data which the GII relies upon so heavily.

You should still have a look at the report

Now, don’t think that I don’t like or respect the GII. I think it’s fantastic, and I am continually finding more golden nuggets of information in it.

If you are interested in Innovation, it’s a vital read, and I strongly suggest you download it.

However, I would not recommend using it as the sole resource for making any important innovation-based decisions, like where to invest in a new R&D facility or which new market to go after. The data is not geared towards that.

I hope that after reading this, you’ll be able to get the value from the report and still understand its inherent limitations.
