by Fraser Sampson, Activist Post:
In surveillance, perspective is everything. Your viewpoint affects what you will see and therefore what you will miss. And we are missing a lot.
Whether you are contemplating a policy, practice or product, you will get a better overall view of the risks and issues by shifting between overlapping perspectives. In remote biometric surveillance, the composite view from three specific vantage points can be helpful. Those points are the technological, the legal and the societal, and they combine to create a richer, clearer image of what is coming and what is already here.
Viewing the relevant issue, challenge or proposition from the perspectives of the possible (what can be done), the permissible (what must/must not be done) and the acceptable (what people support/expect to be done) can bring out the relevant features and expose the relative weight being given to, or assumed by, any one perspective. It can also reveal how we got to where we now find ourselves, helping us build on successes while swerving some mistakes of the past.
Early policing experimentation in several Western jurisdictions appears to have been driven by the first perspective. With some AI-driven capabilities like facial recognition being fetishised, new technology was adopted with a less than detailed look at where it might fit legally and, finally, an attempt to persuade the public that it was good for them. The result was police forces taking algorithms originally designed to predict aftershocks from earthquakes and using them to predict street robbery. Thinking you can use seismic aftershock predictors in this way without also predicting aftershocks to public trust and confidence is probably the very definition of irony, and its impact is still felt today.
Panning across to the legal perspective, AI-driven surveillance in policing and security brings some very specific challenges, an overarching one being accountability. Accountability means answering for decisions not to use available technology as well as for its deployment. Data and privacy regulators like to say: “Just because you can, doesn’t mean you must”. That may be the case for individual and commercial use of technology, but when it comes to policing and security, I do not necessarily agree. The state has a legal duty to use readily available means to prevent certain types of harm to the citizen, and those ‘means’ arguably include available surveillance capabilities.
And while much attention is paid to the technological and legal perspectives, it is societal expectation that will ultimately determine democratic accountability for whether the technology available to the police is used or eschewed, not least because the UK still has a model of policing based on consent. We, the people, are now using sophisticated surveillance tools once the preserve of state intelligence agencies, routinely and at minimal financial cost.
We freely share personal datasets, including our facial images, with private companies and government on our smart devices for access control, identity verification and threat mitigation. From this societal vantage point it seems reasonable for the police to infer that many citizens not only support them using new remote biometric technology but also expect them to do so, to protect communities, prevent serious harm and detect dangerous offenders, who, by the way, are also using it to potentially devastating effect. But to what extent is that expectation borne out?