Opinion

Buzzword overload.

Apr 13, 2019

The not-so-fine line between conceptual accuracy and opportunism.

Languages function because they rely on shared bodies of meaning: common vocabularies. The dissemination of technical terms is, as a result, fundamental within any knowledge field. It lays down the foundation for complex concepts to be communicated easily. For example, the fact that two people know what ‘gradient descent’ is makes it easier for them to discuss machine learning algorithms. However, take terminology expansion too far and one can quickly stray onto murky ground.

Buzzwords are perhaps the most obvious manifestation of that phenomenon. When used by the technology community, they tend to quickly descend into self-aggrandising spectacles. Strategically dropping the right buzzwords is bound to make you sound cool.

Alas, more often than not it is also bound to make you sound wrong. The eagerness to associate anything and everything with the buzzwords of the moment, multiplied across millions of people equally anxious to join the club, will surely corrupt meanings.

The world of data analysis is a case in point. For starters, many people in it have an uncontrollable urge to claim that, until recently, data science did not exist: modern tech industries had to emerge and rescue humanity from the shackles of intuition. Inspiring, but not true. The scientific method has been around long enough to show that empirical data analysis is a core foundation of knowledge generation. Data science, under a different name, has existed for a long, long time.

As technologies evolve, data-crunching possibilities become more robust. Marketers, investors, journalists and entrepreneurs just could not resist the allure of riding the ‘big data’ wave. Soon afterwards, ‘machine learning’ started gaining traction: it was now possible to widely deploy statistical techniques that had been invented decades ago. The neural network model, for example, was first proposed in 1943 – but, back then, the computational power required to handle its brute-force calculations was non-existent. Ultimately, it did not take long for a specific type of neural network (a really large one) to spawn a yet more jaw-dropping term: ‘deep learning’. The chart below illustrates how interest in each of those terms has evolved since 2010:

In and of itself, the popularisation of buzzwords is not a problem. The problem lies in the ever-expanding scope of meanings attributed to them, which often amount to plain misconstructions.

Big data should be genuinely big in order to warrant the label. Machine learning is not the same as manually pre-configuring a system to respond to different scenarios. And deep learning does not mean that computers have now conquered the final barrier of wisdom accumulation; it just means that they can handle much bigger neural networks than before.
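To make the machine learning distinction concrete, here is a minimal, purely illustrative sketch in Python. The traffic features, thresholds and labels are invented for the example, and scikit-learn’s DecisionTreeClassifier stands in for any learned model; the point is only that the decision rules are inferred from labelled data rather than written by hand.

```python
# Illustrative sketch only: toy features and labels, not a real classifier.
from sklearn.tree import DecisionTreeClassifier

# "Pre-configured" approach: a human writes the rule explicitly.
def rule_based_label(packet_size, port):
    if port == 443 and packet_size > 1000:
        return "video"
    return "other"

# Machine learning approach: the rule is learned from labelled examples.
X = [[1200, 443], [300, 443], [1500, 443], [200, 80]]  # [packet_size, port]
y = ["video", "other", "video", "other"]               # human-provided labels

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[1400, 443]]))                    # -> ['video']
```

Neither approach is magic: one encodes human judgement directly, the other distils it from examples. Only the second deserves to be called machine learning, and neither deserves to be called deep learning unless a (large) neural network is actually involved.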

At Niometrics, we understand the pull from – and the utility of – buzzwords. But we do not like it when they are used at the cost of describing technology accurately.

For example, we do use machine learning to generate our network traffic signatures. In this day and age, one could almost claim, by extension, that we are also doing ‘deep learning’. But we do not use neural networks. Not yet, at least. Other machine learning techniques have been more fitting and scalable for our needs. Intellectual honesty, then, requires us to stick to the machine learning designation – putting substance over form, and conceptual accuracy over the eagerness to impress with empty claims.

As we push forward with R&D efforts to keep enhancing our network analytics technology, we are bound to legitimately enter new capability spaces. When that happens, we will call them as they should be called. In the meantime, we will stick to a fair and square representation of what we are building – buzzwords aside.