30 Mar
Keeping watch on AI surveillance
You can’t escape AI surveillance.
China. It would probably be your first answer if you were asked to name a nation that uses artificial intelligence (AI) surveillance. And you’d be right. But many other countries are adopting it too.
According to a report by the Carnegie Endowment for International Peace, at least 75 of the 176 countries it researched are actively using AI surveillance. And we’re not just talking about autocratic states like China and Russia – 51% of advanced democracies, including the UK, are using it too.
On top of that, you’ve got global corporations (Amazon, Google, Facebook etc.) and many other businesses using it for financial gain. You can’t avoid AI surveillance, wherever you are in the world.
In this paper, we’ll examine the technology’s capabilities, the benefits and risks to society, and the danger of the valuable data it generates being stolen.
The Chinese connection.
This is unlikely to come as a surprise – China is the biggest developer of AI surveillance technology. And the largest exporter too. Four Chinese companies (Huawei, Hikvision, Dahua, and ZTE) supply the tech to 63 countries, with Huawei alone supplying 50 of them.
Soft loans are also used to encourage governments that typically couldn’t afford the technology – countries like Uzbekistan, Uganda, Kenya and Mongolia – to adopt it. This raises concerns that the Chinese government could be subsidising the sale of advanced repressive technology.
But China isn’t alone. The US is up there too. It supplies 32 countries with AI surveillance technologies. The major players being IBM (11 countries), Palantir (nine countries), and Cisco (six countries).
AI surveillance is happening in several forms. Predominantly using cameras and through electronic devices that connect to the internet. Your phone. Your TV. Your laptop. Your fridge. Your washing machine. And so on.
Smile – you’re on camera – again, and again…
AI surveillance cameras are far more advanced than traditional CCTV ones. Some have greater functionality and performance than others, as with any technology, yet they all share the ability to be programmed with a series of algorithms.
An AI surveillance camera compares what it sees live with the reference material it’s programmed with. For example, it can detect if a human (potential intruder) has walked into its field of vision and raise an alarm. But if a cat or other wildlife does, it does nothing. That’s because it knows the characteristics of a human: approximate height and width, two arms and two legs, vertical rather than horizontal, and so on.
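To make the idea concrete, here’s a minimal sketch of that kind of rule-based check. The thresholds and attribute names (height, width, orientation) are hypothetical illustrations of the ‘reference material’ a camera might be programmed with, not a real camera API:

```python
# Hypothetical sketch: deciding whether a detected object matches the
# programmed characteristics of a human intruder.

def looks_like_human(height_m, width_m, orientation):
    """Return True if the object's attributes fit a simple human profile.

    The thresholds below are illustrative assumptions, not values from
    any real surveillance product.
    """
    tall_enough = 1.0 <= height_m <= 2.2   # approximate human height range
    narrow_enough = width_m <= 0.8         # humans are narrow relative to height
    upright = orientation == "vertical"    # vertical, not horizontal
    return tall_enough and narrow_enough and upright

def handle_detection(height_m, width_m, orientation):
    # Raise an alarm only for human-like objects; ignore cats and wildlife.
    if looks_like_human(height_m, width_m, orientation):
        return "ALARM: potential intruder"
    return "ignore"

print(handle_detection(1.8, 0.5, "vertical"))    # human-like -> alarm
print(handle_detection(0.3, 0.5, "horizontal"))  # cat-like -> ignored
```

Real systems replace these hand-written rules with trained object-detection models, but the decision logic – compare what’s seen against known characteristics, then act – is the same.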
Moving further up the complexity scale is ‘behavioural analytics’. This is total self-learning software – with no initial programming. The AI surveillance camera learns what is ‘normal’ for people, vehicles, machines and the environment based on its own observations. After several weeks of learning, it recognises when something out of the ordinary happens.
For example, a person lying on the floor in the middle of the street, surrounded by several others. Maybe they collapsed and are being helped by passers-by? Or are they being assaulted by a gang? Either way, a human operator is notified of the unusual behaviour.
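The learning idea above can be sketched very simply: record ‘normal’ activity during an observation period, then flag anything that deviates far from it. The observed quantity here (people counted per minute in a scene) and the three-standard-deviation threshold are illustrative assumptions, not how any particular product works:

```python
# Hypothetical sketch of 'behavioural analytics': learn what is normal from
# observation alone, then flag unusual activity for a human operator.
from statistics import mean, stdev

class BehaviourBaseline:
    def __init__(self):
        self.observations = []

    def learn(self, value):
        # Learning phase: record normal activity levels over several weeks.
        self.observations.append(value)

    def is_unusual(self, value, threshold=3.0):
        # Flag anything more than `threshold` standard deviations from the
        # learned mean, so a human operator can be notified.
        mu = mean(self.observations)
        sigma = stdev(self.observations)
        if sigma == 0:
            return value != mu
        return abs(value - mu) / sigma > threshold

baseline = BehaviourBaseline()
for count in [4, 5, 6, 5, 4, 6, 5, 5, 4, 6]:  # weeks of 'normal' observations
    baseline.learn(count)

print(baseline.is_unusual(5))    # typical activity -> False
print(baseline.is_unusual(40))   # sudden crowd forming -> True
```

Production systems model far richer features (trajectories, speeds, object interactions), but the principle is the same: no initial programming, just a learned baseline and a notification when reality departs from it.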
Where would we be without the internet?
Today, it’s hard to imagine life without the internet. How did we cope before the millennium? We’re using numerous digital platforms every day (hour and minute), to have fun, to learn, to work, to discover, to make life easier – and so much more.
On Amazon, you can buy something in one click and have it delivered for free the next day. With Facebook, you know exactly what friends and family are doing or thinking. Google lets you find anything and everything in seconds. And you can binge-watch box sets for hours (days), thanks to Netflix.
Then there’s all the other things we use the internet for. Turning the lights on. Switching the washing machine off. It’s an intrinsic part (if not the driving force) of our lives.
Yet, every time you connect to the internet, AI surveillance could be (is most likely) happening. Netflix kindly makes movie recommendations, based on what you’ve watched previously. Your smart fridge automatically orders milk based on your consumption behaviour. Facebook bombards you with ads for a product you briefly chatted to a colleague about while making a coffee. AI surveillance is happening in so many ways. But do people care? Do they know? If so, do they just accept it? Are they blinkered and only see the positives?
Looking at how AI surveillance can benefit our society.
Knowing just some of the possibilities, you can easily see how AI surveillance can be extremely useful. Improving security is an obvious key benefit. The police and government agencies can use it to find criminals or terrorists. In August 2019, New York police officers used facial recognition to apprehend an alleged rapist within 24 hours.
Law-enforcement agencies can even use AI surveillance to predict when crimes might occur. Keeping law-abiding citizens safer still.
There are many other positive uses of AI surveillance. For the public and businesses. It’s minimising congestion by changing traffic light phasing in response to real-time activity. Retailers are maximising sales by analysing browsing patterns and footfall, then changing pricing and product positioning based on real-time customer movements. There’s a London bar using AI surveillance to ensure you get served in the right order.
Another strong plus-point is its accuracy. It minimises ‘human error’. It can run 24/7 at optimum performance, whereas a security guard can’t monitor a huge bank of screens with full concentration for the duration of their shift.
The technology that gives an advantage, but can also take advantage.
AI surveillance has transformed the abilities of governments and corporations to monitor people and systems. Probably the biggest concern surrounding it is how an individual’s privacy and civil liberties are compromised.
Some autocratic countries are exploiting it for mass surveillance purposes. Including the likes of Russia, China and Saudi Arabia. But advanced democracies and large businesses are using it for their own benefit too, without the full (if any) consent of those it’s monitoring.
People are unwittingly using free services, such as Google and Facebook, and providing these companies with vast amounts of data that are used for financial gain. This is what the American author and scholar Shoshana Zuboff has coined ‘surveillance capitalism’. She writes that it “…unilaterally claims human experience as free raw material for translation into behavioural data. Although some of these data are applied to service improvement, the rest are declared as a proprietary behavioural surplus, fed into advanced manufacturing processes known as ‘machine intelligence’, and fabricated into prediction products that anticipate what you will do now, soon, and later.”
There are also fears about facial recognition misidentifying people. And while the technology has come a long way in recent years (it was 20 times better in 2018 than in 2014), it’s still not without faults. One is that it’s less effective at identifying people of colour and women than white males.
Due to concerns over civil liberties and accuracy, some US cities, such as San Francisco and Oakland in California, have banned government departments from using facial recognition software.
AI surveillance improves security – but what about the security of its data?
Whether you believe AI surveillance is good or bad, one thing is indisputable – the vast amount of data it gathers. All of it highly personal. And extremely valuable.
Not only do you need a scalable solution to store this data, it must be extremely secure too. It’s at constant risk of being stolen. And as we’ve discovered, this information is very powerful in ‘good’ hands. So if (when) it falls into ‘bad’ ones, the impact on individuals, businesses and entire economies is frightening.
Unfortunately, the hacking of AI surveillance data is already happening. In 2019, there was a major breach of a biometrics system used by banks, UK police and defence firms. Israeli security researchers, who review virtual private network services, found the database was unprotected and mostly unencrypted. They were able to access over 27.8 million records, including facial recognition, fingerprint and personal data. Fortunately, this was a controlled test and the vulnerability was reported. But who knows the true scale of malicious hacks on AI surveillance data?
What’s your view on AI surveillance?
Is your business currently using AI surveillance? Are you considering it? We’d love to hear your thoughts on the subject and find out what measures you’re implementing to protect your highly valuable and sensitive data.