A common theme in sci-fi books and films is a dystopian rise of the machines, culminating in humans being enslaved or all but destroyed (usually rescued at the last minute by some bright spark with a virus). As someone working in technology, I've always assumed that the advances we achieve are by and large a good thing, but I figured it was time to reflect on that assumption.

Theodore John Kaczynski became a professor at the University of California at the age of 25, after graduating from Harvard and taking a PhD in mathematics at the University of Michigan. After two years as a professor he resigned, and from 1971 lived as a survivalist recluse in a log cabin in Montana. Then, between 1978 and 1995, he sent 16 bombs to various targets, including scientists and computer engineers, killing three people and injuring 23. In 1995, Kaczynski wrote to the New York Times, promising to desist from terrorist activities on condition that they publish his manifesto, entitled "Industrial Society and Its Future" and more widely known as "The Unabomber Manifesto". The document was published, his brother recognised his style of writing and informed the FBI, and Kaczynski was convicted and sentenced to life in prison.

The manifesto's main premise is not that robots will turn into killer Terminator-style androids which will inevitably take over the planet and enslave us, but rather that the human race will "drift into a position of such dependence on the machines that it would have no practical choice but to accept all the machines' decisions. As society and the problems that face it become more and more complex, and machines become more and more intelligent, people will let machines make more of their decisions for them simply because machine-made decisions will bring better results than man-made ones". [Thus the blueprint for the War Operation Plan Response from the 1983 film WarGames, where a US military supercomputer is entrusted with launch authority when officials realise they cannot rely on humans to fire missiles in the event of a nuclear attack - ed.]

It goes on to suggest that we would end up so dependent that we would be unable to function without the machines and their decisions, meaning they are effectively in control; indeed, switching them off would cause the fundamental underpinnings of modern society to cease to function... and ultimately us along with them.

What I found most chilling about this document is the effect it has had on some industry players and academic experts in Artificial Intelligence (AI). Bill Joy, co-founder of Sun Microsystems, was profoundly affected by reading the manifesto, quit his job and went to live in the wilderness. So did Dylan Evans, an author, academic and expert in AI: Evans sold his house and resigned his post to set up an experiment in sustainable living in Scotland. The aim of the community was to cope with a hypothetical apocalyptic scenario of global banking, societal and governmental breakdown. In the end, the stresses he underwent during this process (which Evans writes about in his book The Utopia Experiment) led to a mental breakdown. I noted that in the book, Evans described himself as having been "radicalised" by the Unabomber Manifesto. These two intelligent, successful individuals are just the ones I am aware of; there could be many more, particularly those in less prominent and therefore less noticeable positions. It seems these 35,000 words have had a substantial impact on those who read them, despite much of the Ayn Rand-style hyperbole (though to note, Ayn Rand's "Atlas Shrugged" is reported to be a popular source of inspiration amongst Fortune 500 CEOs and young entrepreneurs in Silicon Valley).

I considered the plausibility of those particular passages of the manifesto and constructed a hypothetical scenario that seems credible to me. We are on the brink of automated self-driving cars and lorries, which are being trialled globally, at the same time as vehicle production facilities are heavily automated. Rotterdam port, the largest in Europe, is also heavily automated, with most of the loading and unloading performed by robots. Increasingly, the design of vehicles relies on computer software to calculate aerodynamics and ergonomics, and Big Data is used to determine which features are most well received by consumers. In time, one could see these technologies being seamlessly integrated, with cars designed, materials ordered and delivered, vehicles built and then shipped completely autonomously by machines. The only human involvement would be to purchase the car, sit in it and tell it where to go. And there we have it: robots building robots for us.

Any human involvement in any other part of the process would be little more than a token gesture. Indeed, if every industry travelled this path, what would we do all day? Would work and money become things of the past, our lives filled with leisure and perhaps some voluntary robot maintenance (no, actually, they would have that covered too)?

But of course industry is just one arena; autonomous technology is already widespread in the field of warfare. The development of autonomous robots that can select targets and "neutralise" them with the empathy of a speed camera is already underway, and multiple campaign groups are raising profound ethical concerns about this technology.

If we consider commerce and banking, the stock markets are already heavily computerised and automated, as are many of the decisions that affect us as personal banking customers, such as the approval of loans, mortgages and insurance policies. This computer-based risk analysis is only likely to become more deeply embedded: as the volume of data required to process these transactions grows, there is no way for a human to compete.
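To make the idea of automated approval concrete, here is a minimal sketch of the kind of rule-based risk scoring described above. Everything in it is invented for illustration: the function names, weights and threshold are hypothetical, not any real bank's policy.

```python
# Purely illustrative: weights, threshold and inputs are invented
# for demonstration and do not reflect real lending criteria.

def risk_score(income, existing_debt, missed_payments):
    """Return a crude risk score; higher means riskier (hypothetical)."""
    debt_ratio = existing_debt / max(income, 1)
    return 0.6 * debt_ratio + 0.4 * missed_payments

def approve_loan(income, existing_debt, missed_payments, threshold=1.0):
    """Approve automatically when the risk score falls below the threshold."""
    return risk_score(income, existing_debt, missed_payments) < threshold

print(approve_loan(income=30000, existing_debt=5000, missed_payments=0))  # True
print(approve_loan(income=30000, existing_debt=5000, missed_payments=3))  # False
```

The unsettling point is precisely that no human reviews the outcome: once the threshold is set, the machine's decision stands, for applicant after applicant.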

Evans mentions in his book that the goal of the Artificial Intelligence field is often to improve social situations, such as providing care and companionship for the elderly. He himself, however, had a vision of blocks of apartments with a single elderly person in every room, each alone with a robot. That is a technical solution to a societal problem we have created ourselves, and it feels very much like the wrong answer.

I recognise this process of empowerment in technologies being implemented now across all sectors. The advent of cloud computing means that even the mainstream IT profession is abstracted from the "nuts and bolts" of computers: people no longer get hands-on fixing servers in data centres. Modern cloud facilities are entirely computer controlled; computing units arrive in sealed shipping containers with built-in power and cooling, and when the majority of the units inside reach end of life, the whole container is replaced.

Big Data means we can use computers to make sense of the masses of data collected over the last few decades, allowing us to draw conclusions across different data sets in ways that would never have been possible until recently. But again, even that last piece of analysis could probably be undertaken by a machine. At a recent event I spoke at, a representative of a large communications company described how they were capturing anonymised data from mobile signals, from which commutes, school runs, traffic jams, social trends, spending habits and so on could be derived and used to tell commercial organisations where to target their products and advertisements. This was warmly received by most of the audience, but as someone who has always tried to introduce technology to make people's lives better, I found it all a bit unnerving and lacking in social benefit.
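The mechanics behind that kind of product are mundane. A sketch, with entirely made-up device hashes, cell identifiers and timestamps, of how anonymised mobile pings become a sellable footfall count:

```python
from collections import Counter
from datetime import datetime

# Hypothetical anonymised pings: (hashed_device_id, cell_id, timestamp).
# All values are invented for illustration.
pings = [
    ("a1f3", "cell-12", datetime(2016, 5, 9, 8, 15)),
    ("b7c2", "cell-12", datetime(2016, 5, 9, 8, 40)),
    ("a1f3", "cell-07", datetime(2016, 5, 9, 17, 5)),
]

# Deduplicate so each device counts once per (cell, hour), then count
# distinct devices seen in each cell during each hour of the day.
seen = {(device, cell, ts.hour) for device, cell, ts in pings}
footfall = Counter((cell, hour) for _, cell, hour in seen)

print(footfall[("cell-12", 8)])  # 2 devices in cell-12 during the 8am hour
```

No individual is named anywhere in the output, yet the aggregate reveals exactly when and where the morning commute flows, which is precisely what makes it commercially valuable.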

So does all this spell the end for us as an autonomous race, or is this level of progress simply what is required to sustain a rapidly increasing, currently earthbound population?

Well, I guess that depends on us. Just as the proliferation of nuclear weapons is perceived to have prevented World War III to date, I'd like to be optimistic that the machines some fear will be the end of us may actually be the things that keep us alive, albeit in a way quite different from now. There is a paradox, though, in this anticipated future of symbiotic machine dependence: if we ceased to exist, what would the purpose of the machines be, and would they care if they had no purpose? Maybe there will be lots of robots pondering the meaning of their existence.

It may feel a bit like Star Trek to consider that. But then again, put yourself in the shoes of one of the many tribal societies that dominated the world not all that long ago, where brutality, violence and high mortality were predominant features of life; I doubt they could have foreseen a future of living to 100 in a skyscraper or travelling to the other side of the world in a day. Change is inevitable, but when it comes and what it looks like, well, we'll have to wait and see.

Martin Britton is Prif Swyddog Gwybodaeth (Chief Information Officer) of Natural Resources Wales