Back in November, Crimson’s CIO Executive Search practice hosted a lively evening of dinner, drinks, and discussion at the St. Pancras Renaissance Hotel in London. Technology leaders from across the public, private, and voluntary sectors joined us for fascinating, wide-ranging dialogue.
Though a handful of different topics were presented for our guests to discuss, time and time again the conversation came back around to AI and data. In keeping with the Chatham House Rule, no individuals or organisations in attendance are identified here.
So, let’s unpack some of the inspiration that our team took from the evening.
There are plenty of incredibly useful ways that AI can bolster operational efficiency: automated call and chat attendants, online retail cross-sell algorithms, and predictive asset maintenance, to name a few. Thankfully, these applications are increasingly well understood by non-technical leaders and teams.
However, these applications - though useful - are relatively surface level. The real value of working with AI comes when you dive beyond mere functionality and pinpoint precisely what you want to extract.
To get the right value from AI applications, the right data needs to be layered underneath - optimised and sanitised for that particular use case. One IT leader at our event observed that AI failures generally boil down to poor data or poor processes somewhere in the value chain.
Optimising the data you hold isn’t just useful for adopting AI. It has been argued that data is now the world’s most valuable commodity, surpassing even oil. And AI’s capability to trawl, analyse, and cross-reference huge amounts of data opens up possibilities to extract even more value from data that was already worth a great deal.
Organisations often have a lot of data at their disposal, but it’s not in a format from which value can be efficiently extracted. It may be siloed in myriad spreadsheets or scattered across individual devices.
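Mechanically, consolidating scattered spreadsheets is often the easy part. As a purely illustrative sketch (the folder name, file formats, and column conventions below are hypothetical assumptions, not a prescription), a few lines of Python can pull siloed workbooks into a single queryable dataset:

```python
# Minimal sketch: consolidating siloed spreadsheets into one tidy dataset.
# Assumes pandas plus the openpyxl and pyarrow engines are installed;
# the folder name and column conventions are hypothetical.
from pathlib import Path

import pandas as pd

frames = []
for path in Path("finance_exports").glob("*.xlsx"):
    df = pd.read_excel(path)
    # Normalise inconsistent column headers across workbooks.
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df["source_file"] = path.name  # keep provenance for auditing
    frames.append(df)

combined = pd.concat(frames, ignore_index=True).drop_duplicates()
combined.to_parquet("combined_dataset.parquet")  # one store instead of many silos
```

The genuinely hard work, though, is rarely the script itself.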
But solving the problem of siloed data isn't just a case of setting up a central data warehouse for all to use and saying “off you go”. There can be cultural and behavioural barriers too, which often boil down to trust. Maybe previous IT failings have left team members mistrustful of centralised, collaborative tools. Maybe interpersonal conflicts have left individuals feeling the need to hoard and “protect” data from another team or individual. Or perhaps poor data practices have simply become habit!
Combating siloed data may therefore require a surprising amount of change management, cultural engagement, and habit-forming training. After all, getting people to loosen the reins on “their” workplace data - and helping them understand that far more value can be extracted from pooled, collated data - can be a lot for some folks to get their heads around.
There’s still a lot of fear, mistrust, and trepidation around artificial intelligence that can be an uphill battle to combat.
Before the generative AI boom that began in 2022, AI’s main presence in the public zeitgeist was within the realms of sci-fi - and it was usually the bad guy. Concepts like Skynet, HAL 9000, and GLaDOS captured our imaginations and stoked fears of evil, all-powerful computational intelligences with control over life and death.
But whether inspired by fiction or not, this negative perception of AI is proving hard to shake off. In order to turn this tide within organisations, a pro-AI visionary is needed to grab the reins and seize the potential value!
Organisations curious about AI implementation may benefit from someone taking the role of an AI advocate: someone who can communicate the benefits of AI, answer questions, and allay fears throughout the chain of command.
When a specific AI use case is identified, this advocate can be charged with building a case for that implementation: engaging with the individuals who will directly benefit from the idea and demonstrating the positive return on that investment.
Quick side note: if you’re interested in using AI to uncover new, value-generating opportunities, why not download our guide, The AI Strategy Roadmap: Navigating the Stages of Value Creation?
In it, you’ll discover the key stages of building your own tailored strategic AI roadmap, with emerging AI best practices and value creation at its core.
Download the Free Ebook Today!
I think we can all agree that there is tremendous risk to basic duty of care when AI is applied poorly and without guardrails. Yet what those guardrails should look like largely remains to be seen.
The EU’s AI Act is already in force. The UK seems to be looking at a less statutory “framework” approach. Across the pond, it’s believed that Trump will take a rather laissez-faire attitude to AI safeguards, leading to concerns around diplomatic and military risk. But whatever jurisdiction you’re in, legislative and regulatory factors could well impact what you’re able to do with AI.
This sparks an interesting discussion about the interaction between humans and cyber risk. After all, you can adopt the most robust technical cybersecurity protections known to humankind, but humans are still going to slip up, act differently under pressure, or have an adversarial agenda.
There is an interesting point to be made too about the intersection between organisational culture and cybersecurity. Reportedly, organisations with a more authoritarian “do what I say” approach to leadership may be at higher risk of falling victim to social engineering attacks.
This is an observation shared by Daniel DiGriz at MadPipe (via Digital Guardian):
Companies with an authoritarian hierarchy run more risk for phishing attacks, because employees tend to be cooperative with schemes that sound authoritative. This is also true in some organizational cultures where it's frowned upon to ask for help, there's some degree of mutual distrust, or a less collaborative work model.
It’s worth considering whether these strict organisational cultures are less conducive to the kinds of critical, flexible thinking that enables people to think twice before being reeled in by a phish.
Whatever the reason, in light of the recent generative AI revolution, organisations need to shore up their defences against social engineering attacks. Deepfake technology is increasingly being used to better impersonate individuals and convince victims to do a scammer’s bidding.
Though artificial intelligence as a concept has been around since the 1940s, today’s strides are those of a technology still very much in its infancy. Amazingly, AI already generates more than a quarter of all new code at Google. What happens to the world - and to the tech talent market - when computers can effectively code themselves remains to be seen.
Training and operating large language models is still constrained by energy usage and computational power. These models consume vast amounts of energy, so balancing technological advancement with increasingly unavoidable climate concerns will become essential.
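For a rough sense of scale, here’s a hedged back-of-envelope estimate; every figure below is an illustrative assumption rather than a measured value for any real model:

```python
# Back-of-envelope estimate of the energy needed to train a large language
# model. All figures are illustrative assumptions, not measured values.

total_flops = 3.1e23      # assumed total training compute (GPT-3-scale order)
sustained_flops = 5e13    # assumed sustained FLOP/s per accelerator
power_watts = 400         # assumed average draw per accelerator
pue = 1.1                 # assumed data-centre power usage effectiveness

accelerator_seconds = total_flops / sustained_flops
energy_joules = accelerator_seconds * power_watts * pue
energy_mwh = energy_joules / 3.6e9  # joules -> megawatt-hours

print(f"~{energy_mwh:,.0f} MWh")  # roughly 760 MWh under these assumptions
```

Even with generous assumptions, a single training run lands in the hundreds of megawatt-hours - enough to power hundreds of homes for a year - before any inference workload is counted.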
Organisations are already turning to nuclear energy to meet their growing power needs, with Microsoft signing a deal to purchase energy from the Three Mile Island plant and Google signing a deal with Kairos Power. Though modern nuclear power plants are considered largely safe, the memory of disasters like Chernobyl and Fukushima has left a bad impression on the public consciousness.
But could quantum computing overcome both barriers? Some research suggests that quantum computing could be as much as 100 times more energy efficient than a classical supercomputer in certain situations. If we’ve not reached artificial general intelligence by the time quantum computing hits the mainstream, then it’ll certainly be just around the corner.
With AI, data can be processed and digested at a blisteringly fast pace. Yet human knowledge remains more qualitative and contextually rich than anything a computer can muster.
The relationship between human intelligence and artificial intelligence is never going to stand still. As technology advances, our relationship with it is going to change. And as our habits and attitudes towards tech change, different dynamics in that relationship are going to develop too.
It’s a relationship - and a race - that’s going to be interesting to watch.
Crimson’s CIO Search practice pairs ambitious employers with authentic tech leaders who have the passion and talent to transform organisations for the better.
If you’re a CIO, CTO, or IT leader looking for your next role in the UK, our professional team welcomes you to have a free, confidential consultation with one of our CIO Search experts.
Or if you're an employer looking for exceptional IT leadership talent, please book a free, no-obligation chat with our executive search team.
Learn more or book a call here